00:00:00.001 Started by upstream project "autotest-nightly" build number 3877 00:00:00.001 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3257 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.123 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.124 The recommended git tool is: git 00:00:00.124 using credential 00000000-0000-0000-0000-000000000002 00:00:00.127 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.211 Fetching changes from the remote Git repository 00:00:00.214 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.244 Using shallow fetch with depth 1 00:00:00.244 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.244 > git --version # timeout=10 00:00:00.268 > git --version # 'git version 2.39.2' 00:00:00.268 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.305 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.305 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.921 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.931 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.942 Checking out Revision 4b79378c7834917407ff4d2cff4edf1dcbb13c5f (FETCH_HEAD) 00:00:05.942 > git config core.sparsecheckout # timeout=10 00:00:05.951 > git read-tree -mu HEAD # timeout=10 00:00:05.966 > git checkout -f 4b79378c7834917407ff4d2cff4edf1dcbb13c5f # timeout=5 00:00:05.982 Commit message: "jbp-per-patch: add create-perf-report job as a part of testing" 00:00:05.982 > git rev-list --no-walk 4b79378c7834917407ff4d2cff4edf1dcbb13c5f # timeout=10 00:00:06.100 [Pipeline] Start of Pipeline 00:00:06.118 [Pipeline] library 00:00:06.120 Loading library shm_lib@master 00:00:06.120 Library shm_lib@master is cached. Copying from home. 00:00:06.141 [Pipeline] node 00:00:21.143 Still waiting to schedule task 00:00:21.144 Waiting for next available executor on ‘vagrant-vm-host’ 00:08:31.973 Running on VM-host-WFP1 in /var/jenkins/workspace/nvme-vg-autotest 00:08:31.975 [Pipeline] { 00:08:31.989 [Pipeline] catchError 00:08:31.991 [Pipeline] { 00:08:32.006 [Pipeline] wrap 00:08:32.017 [Pipeline] { 00:08:32.027 [Pipeline] stage 00:08:32.029 [Pipeline] { (Prologue) 00:08:32.052 [Pipeline] echo 00:08:32.054 Node: VM-host-WFP1 00:08:32.063 [Pipeline] cleanWs 00:08:32.072 [WS-CLEANUP] Deleting project workspace... 00:08:32.072 [WS-CLEANUP] Deferred wipeout is used... 
00:08:32.079 [WS-CLEANUP] done 00:08:32.275 [Pipeline] setCustomBuildProperty 00:08:32.341 [Pipeline] httpRequest 00:08:32.363 [Pipeline] echo 00:08:32.365 Sorcerer 10.211.164.101 is alive 00:08:32.371 [Pipeline] httpRequest 00:08:32.375 HttpMethod: GET 00:08:32.376 URL: http://10.211.164.101/packages/jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz 00:08:32.377 Sending request to url: http://10.211.164.101/packages/jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz 00:08:32.377 Response Code: HTTP/1.1 200 OK 00:08:32.378 Success: Status code 200 is in the accepted range: 200,404 00:08:32.378 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz 00:08:32.522 [Pipeline] sh 00:08:32.813 + tar --no-same-owner -xf jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz 00:08:32.831 [Pipeline] httpRequest 00:08:32.850 [Pipeline] echo 00:08:32.852 Sorcerer 10.211.164.101 is alive 00:08:32.862 [Pipeline] httpRequest 00:08:32.867 HttpMethod: GET 00:08:32.868 URL: http://10.211.164.101/packages/spdk_968224f4625508c0012db59f92f718062c66a8c3.tar.gz 00:08:32.868 Sending request to url: http://10.211.164.101/packages/spdk_968224f4625508c0012db59f92f718062c66a8c3.tar.gz 00:08:32.868 Response Code: HTTP/1.1 200 OK 00:08:32.869 Success: Status code 200 is in the accepted range: 200,404 00:08:32.869 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_968224f4625508c0012db59f92f718062c66a8c3.tar.gz 00:08:35.176 [Pipeline] sh 00:08:35.457 + tar --no-same-owner -xf spdk_968224f4625508c0012db59f92f718062c66a8c3.tar.gz 00:08:38.756 [Pipeline] sh 00:08:39.037 + git -C spdk log --oneline -n5 00:08:39.037 968224f46 app/trace_record: add a optional option '-t' 00:08:39.037 d83ccf437 accel: clarify the usage of spdk_accel_sequence_abort() 00:08:39.037 f282c9958 doc/jsonrpc.md fix style issue 00:08:39.037 868be8ed2 iscs: chap mutual authentication should apply when configured. 00:08:39.037 16b33d51e iscsi: Authenticating discovery based on givven credentials. 
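The two tarballs fetched above are pinned snapshots served from the internal Sorcerer cache; outside that network the same SPDK revision can be obtained with a plain clone. A minimal sketch, assuming the public github.com/spdk/spdk mirror and a scratch directory (this is not part of the CI job itself):
# fetch the exact revision shown in the git log above (968224f46)
git clone https://github.com/spdk/spdk.git spdk
cd spdk
git checkout 968224f4625508c0012db59f92f718062c66a8c3
# SPDK vendors DPDK and other dependencies as submodules; the CI tarball already includes them
git submodule update --init --recursive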
00:08:39.057 [Pipeline] writeFile 00:08:39.077 [Pipeline] sh 00:08:39.358 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:08:39.370 [Pipeline] sh 00:08:39.687 + cat autorun-spdk.conf 00:08:39.687 SPDK_RUN_FUNCTIONAL_TEST=1 00:08:39.687 SPDK_TEST_NVME=1 00:08:39.687 SPDK_TEST_FTL=1 00:08:39.687 SPDK_TEST_ISAL=1 00:08:39.687 SPDK_RUN_ASAN=1 00:08:39.687 SPDK_RUN_UBSAN=1 00:08:39.687 SPDK_TEST_XNVME=1 00:08:39.687 SPDK_TEST_NVME_FDP=1 00:08:39.687 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:08:39.694 RUN_NIGHTLY=1 00:08:39.697 [Pipeline] } 00:08:39.720 [Pipeline] // stage 00:08:39.741 [Pipeline] stage 00:08:39.744 [Pipeline] { (Run VM) 00:08:39.760 [Pipeline] sh 00:08:40.041 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:08:40.041 + echo 'Start stage prepare_nvme.sh' 00:08:40.041 Start stage prepare_nvme.sh 00:08:40.041 + [[ -n 7 ]] 00:08:40.041 + disk_prefix=ex7 00:08:40.041 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:08:40.041 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:08:40.041 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:08:40.041 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:08:40.041 ++ SPDK_TEST_NVME=1 00:08:40.041 ++ SPDK_TEST_FTL=1 00:08:40.041 ++ SPDK_TEST_ISAL=1 00:08:40.041 ++ SPDK_RUN_ASAN=1 00:08:40.041 ++ SPDK_RUN_UBSAN=1 00:08:40.041 ++ SPDK_TEST_XNVME=1 00:08:40.042 ++ SPDK_TEST_NVME_FDP=1 00:08:40.042 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:08:40.042 ++ RUN_NIGHTLY=1 00:08:40.042 + cd /var/jenkins/workspace/nvme-vg-autotest 00:08:40.042 + nvme_files=() 00:08:40.042 + declare -A nvme_files 00:08:40.042 + backend_dir=/var/lib/libvirt/images/backends 00:08:40.042 + nvme_files['nvme.img']=5G 00:08:40.042 + nvme_files['nvme-cmb.img']=5G 00:08:40.042 + nvme_files['nvme-multi0.img']=4G 00:08:40.042 + nvme_files['nvme-multi1.img']=4G 00:08:40.042 + nvme_files['nvme-multi2.img']=4G 00:08:40.042 + nvme_files['nvme-openstack.img']=8G 00:08:40.042 + nvme_files['nvme-zns.img']=5G 00:08:40.042 + (( SPDK_TEST_NVME_PMR == 1 )) 00:08:40.042 + (( SPDK_TEST_FTL == 1 )) 00:08:40.042 + nvme_files["nvme-ftl.img"]=6G 00:08:40.042 + (( SPDK_TEST_NVME_FDP == 1 )) 00:08:40.042 + nvme_files["nvme-fdp.img"]=1G 00:08:40.042 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:08:40.042 + for nvme in "${!nvme_files[@]}" 00:08:40.042 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G 00:08:40.042 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:08:40.042 + for nvme in "${!nvme_files[@]}" 00:08:40.042 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-ftl.img -s 6G 00:08:40.042 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:08:40.042 + for nvme in "${!nvme_files[@]}" 00:08:40.042 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G 00:08:40.042 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:08:40.042 + for nvme in "${!nvme_files[@]}" 00:08:40.042 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G 00:08:40.042 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:08:40.042 + for nvme in "${!nvme_files[@]}" 00:08:40.042 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G 00:08:40.042 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:08:40.042 + for nvme in "${!nvme_files[@]}" 00:08:40.042 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G 00:08:40.299 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:08:40.299 + for nvme in "${!nvme_files[@]}" 00:08:40.299 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G 00:08:40.299 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:08:40.299 + for nvme in "${!nvme_files[@]}" 00:08:40.299 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-fdp.img -s 1G 00:08:40.299 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:08:40.299 + for nvme in "${!nvme_files[@]}" 00:08:40.299 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G 00:08:40.299 Formatting '/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:08:40.299 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu 00:08:40.299 + echo 'End stage prepare_nvme.sh' 00:08:40.299 End stage prepare_nvme.sh 00:08:40.310 [Pipeline] sh 00:08:40.588 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:08:40.588 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex7-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex7-nvme.img -b /var/lib/libvirt/images/backends/ex7-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex7-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora38 00:08:40.588 00:08:40.588 
DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:08:40.588 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:08:40.588 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:08:40.588 HELP=0 00:08:40.588 DRY_RUN=0 00:08:40.588 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme-ftl.img,/var/lib/libvirt/images/backends/ex7-nvme.img,/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,/var/lib/libvirt/images/backends/ex7-nvme-fdp.img, 00:08:40.588 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:08:40.588 NVME_AUTO_CREATE=0 00:08:40.588 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,, 00:08:40.588 NVME_CMB=,,,, 00:08:40.588 NVME_PMR=,,,, 00:08:40.588 NVME_ZNS=,,,, 00:08:40.588 NVME_MS=true,,,, 00:08:40.588 NVME_FDP=,,,on, 00:08:40.588 SPDK_VAGRANT_DISTRO=fedora38 00:08:40.588 SPDK_VAGRANT_VMCPU=10 00:08:40.588 SPDK_VAGRANT_VMRAM=12288 00:08:40.588 SPDK_VAGRANT_PROVIDER=libvirt 00:08:40.588 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:08:40.588 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:08:40.588 SPDK_OPENSTACK_NETWORK=0 00:08:40.588 VAGRANT_PACKAGE_BOX=0 00:08:40.588 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:08:40.588 FORCE_DISTRO=true 00:08:40.588 VAGRANT_BOX_VERSION= 00:08:40.588 EXTRA_VAGRANTFILES= 00:08:40.588 NIC_MODEL=e1000 00:08:40.588 00:08:40.588 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt' 00:08:40.588 /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:08:43.874 Bringing machine 'default' up with 'libvirt' provider... 00:08:44.809 ==> default: Creating image (snapshot of base box volume). 00:08:44.809 ==> default: Creating domain with the following settings... 
00:08:44.809 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1720613453_459b31a79a20840aecd7 00:08:44.809 ==> default: -- Domain type: kvm 00:08:44.809 ==> default: -- Cpus: 10 00:08:44.809 ==> default: -- Feature: acpi 00:08:44.809 ==> default: -- Feature: apic 00:08:44.809 ==> default: -- Feature: pae 00:08:44.809 ==> default: -- Memory: 12288M 00:08:44.809 ==> default: -- Memory Backing: hugepages: 00:08:44.809 ==> default: -- Management MAC: 00:08:44.809 ==> default: -- Loader: 00:08:44.809 ==> default: -- Nvram: 00:08:44.809 ==> default: -- Base box: spdk/fedora38 00:08:44.809 ==> default: -- Storage pool: default 00:08:44.809 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1720613453_459b31a79a20840aecd7.img (20G) 00:08:44.809 ==> default: -- Volume Cache: default 00:08:44.809 ==> default: -- Kernel: 00:08:44.809 ==> default: -- Initrd: 00:08:44.809 ==> default: -- Graphics Type: vnc 00:08:44.809 ==> default: -- Graphics Port: -1 00:08:44.809 ==> default: -- Graphics IP: 127.0.0.1 00:08:44.809 ==> default: -- Graphics Password: Not defined 00:08:44.809 ==> default: -- Video Type: cirrus 00:08:44.809 ==> default: -- Video VRAM: 9216 00:08:44.809 ==> default: -- Sound Type: 00:08:44.809 ==> default: -- Keymap: en-us 00:08:44.809 ==> default: -- TPM Path: 00:08:44.809 ==> default: -- INPUT: type=mouse, bus=ps2 00:08:44.809 ==> default: -- Command line args: 00:08:44.809 ==> default: -> value=-device, 00:08:44.809 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:08:44.809 ==> default: -> value=-drive, 00:08:44.809 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:08:44.809 ==> default: -> value=-device, 00:08:44.809 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:08:44.809 ==> default: -> value=-device, 00:08:44.809 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:08:44.809 ==> default: -> value=-drive, 00:08:44.809 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-1-drive0, 00:08:44.809 ==> default: -> value=-device, 00:08:44.809 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:08:44.809 ==> default: -> value=-device, 00:08:44.809 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:08:44.809 ==> default: -> value=-drive, 00:08:44.809 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:08:44.809 ==> default: -> value=-device, 00:08:44.809 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:08:44.809 ==> default: -> value=-drive, 00:08:44.809 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:08:44.809 ==> default: -> value=-device, 00:08:44.809 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:08:44.809 ==> default: -> value=-drive, 00:08:44.809 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:08:44.809 ==> default: -> value=-device, 00:08:44.809 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:08:44.809 ==> default: -> value=-device, 00:08:44.809 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:08:44.809 ==> default: -> value=-device, 00:08:44.809 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:08:44.809 ==> default: -> value=-drive, 00:08:44.809 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:08:44.809 ==> default: -> value=-device, 00:08:44.809 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:08:45.378 ==> default: Creating shared folders metadata... 00:08:45.378 ==> default: Starting domain. 00:08:47.363 ==> default: Waiting for domain to get an IP address... 00:09:05.442 ==> default: Waiting for SSH to become available... 00:09:05.442 ==> default: Configuring and enabling network interfaces... 00:09:10.766 default: SSH address: 192.168.121.38:22 00:09:10.766 default: SSH username: vagrant 00:09:10.766 default: SSH auth method: private key 00:09:13.297 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:09:23.271 ==> default: Mounting SSHFS shared folder... 00:09:24.698 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:09:24.698 ==> default: Checking Mount.. 00:09:26.707 ==> default: Folder Successfully Mounted! 00:09:26.708 ==> default: Running provisioner: file... 00:09:27.640 default: ~/.gitconfig => .gitconfig 00:09:28.207 00:09:28.208 SUCCESS! 00:09:28.208 00:09:28.208 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:09:28.208 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:09:28.208 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 00:09:28.208 00:09:28.216 [Pipeline] } 00:09:28.235 [Pipeline] // stage 00:09:28.244 [Pipeline] dir 00:09:28.245 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt 00:09:28.246 [Pipeline] { 00:09:28.260 [Pipeline] catchError 00:09:28.262 [Pipeline] { 00:09:28.276 [Pipeline] sh 00:09:28.553 + vagrant ssh-config --host vagrant 00:09:28.553 + sed -ne /^Host/,$p 00:09:28.553 + tee ssh_conf 00:09:31.838 Host vagrant 00:09:31.838 HostName 192.168.121.38 00:09:31.838 User vagrant 00:09:31.838 Port 22 00:09:31.838 UserKnownHostsFile /dev/null 00:09:31.838 StrictHostKeyChecking no 00:09:31.838 PasswordAuthentication no 00:09:31.838 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:09:31.838 IdentitiesOnly yes 00:09:31.838 LogLevel FATAL 00:09:31.838 ForwardAgent yes 00:09:31.838 ForwardX11 yes 00:09:31.838 00:09:31.854 [Pipeline] withEnv 00:09:31.857 [Pipeline] { 00:09:31.874 [Pipeline] sh 00:09:32.157 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:09:32.157 source /etc/os-release 00:09:32.157 [[ -e /image.version ]] && img=$(< /image.version) 00:09:32.157 # Minimal, systemd-like check. 
00:09:32.157 if [[ -e /.dockerenv ]]; then 00:09:32.157 # Clear garbage from the node's name: 00:09:32.157 # agt-er_autotest_547-896 -> autotest_547-896 00:09:32.157 # $HOSTNAME is the actual container id 00:09:32.157 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:09:32.157 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:09:32.157 # We can assume this is a mount from a host where container is running, 00:09:32.157 # so fetch its hostname to easily identify the target swarm worker. 00:09:32.157 container="$(< /etc/hostname) ($agent)" 00:09:32.157 else 00:09:32.157 # Fallback 00:09:32.157 container=$agent 00:09:32.157 fi 00:09:32.157 fi 00:09:32.157 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:09:32.157 00:09:32.427 [Pipeline] } 00:09:32.448 [Pipeline] // withEnv 00:09:32.458 [Pipeline] setCustomBuildProperty 00:09:32.475 [Pipeline] stage 00:09:32.477 [Pipeline] { (Tests) 00:09:32.499 [Pipeline] sh 00:09:32.780 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:09:33.055 [Pipeline] sh 00:09:33.335 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:09:33.629 [Pipeline] timeout 00:09:33.629 Timeout set to expire in 40 min 00:09:33.631 [Pipeline] { 00:09:33.642 [Pipeline] sh 00:09:33.920 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:09:34.511 HEAD is now at 968224f46 app/trace_record: add a optional option '-t' 00:09:34.526 [Pipeline] sh 00:09:34.801 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:09:35.073 [Pipeline] sh 00:09:35.398 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:09:35.672 [Pipeline] sh 00:09:35.949 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo 00:09:36.207 ++ readlink -f spdk_repo 00:09:36.207 + DIR_ROOT=/home/vagrant/spdk_repo 00:09:36.207 + [[ -n /home/vagrant/spdk_repo ]] 00:09:36.207 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:09:36.207 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:09:36.207 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:09:36.207 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:09:36.207 + [[ -d /home/vagrant/spdk_repo/output ]] 00:09:36.207 + [[ nvme-vg-autotest == pkgdep-* ]] 00:09:36.207 + cd /home/vagrant/spdk_repo 00:09:36.207 + source /etc/os-release 00:09:36.207 ++ NAME='Fedora Linux' 00:09:36.207 ++ VERSION='38 (Cloud Edition)' 00:09:36.207 ++ ID=fedora 00:09:36.207 ++ VERSION_ID=38 00:09:36.207 ++ VERSION_CODENAME= 00:09:36.207 ++ PLATFORM_ID=platform:f38 00:09:36.207 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:09:36.207 ++ ANSI_COLOR='0;38;2;60;110;180' 00:09:36.207 ++ LOGO=fedora-logo-icon 00:09:36.207 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:09:36.207 ++ HOME_URL=https://fedoraproject.org/ 00:09:36.207 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:09:36.207 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:09:36.207 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:09:36.207 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:09:36.207 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:09:36.207 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:09:36.207 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:09:36.207 ++ SUPPORT_END=2024-05-14 00:09:36.207 ++ VARIANT='Cloud Edition' 00:09:36.207 ++ VARIANT_ID=cloud 00:09:36.207 + uname -a 00:09:36.207 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:09:36.207 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:09:36.774 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:37.031 Hugepages 00:09:37.032 node hugesize free / total 00:09:37.032 node0 1048576kB 0 / 0 00:09:37.032 node0 2048kB 0 / 0 00:09:37.032 00:09:37.032 Type BDF Vendor Device NUMA Driver Device Block devices 00:09:37.032 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:09:37.032 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:09:37.032 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:09:37.032 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:09:37.032 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:09:37.032 + rm -f /tmp/spdk-ld-path 00:09:37.032 + source autorun-spdk.conf 00:09:37.032 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:09:37.032 ++ SPDK_TEST_NVME=1 00:09:37.032 ++ SPDK_TEST_FTL=1 00:09:37.032 ++ SPDK_TEST_ISAL=1 00:09:37.032 ++ SPDK_RUN_ASAN=1 00:09:37.032 ++ SPDK_RUN_UBSAN=1 00:09:37.032 ++ SPDK_TEST_XNVME=1 00:09:37.032 ++ SPDK_TEST_NVME_FDP=1 00:09:37.032 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:09:37.032 ++ RUN_NIGHTLY=1 00:09:37.032 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:09:37.032 + [[ -n '' ]] 00:09:37.032 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:09:37.290 + for M in /var/spdk/build-*-manifest.txt 00:09:37.290 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:09:37.290 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:09:37.290 + for M in /var/spdk/build-*-manifest.txt 00:09:37.290 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:09:37.290 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:09:37.290 ++ uname 00:09:37.290 + [[ Linux == \L\i\n\u\x ]] 00:09:37.290 + sudo dmesg -T 00:09:37.290 + sudo dmesg --clear 00:09:37.290 + dmesg_pid=5141 00:09:37.290 + [[ Fedora Linux == FreeBSD ]] 00:09:37.290 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:37.290 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:37.290 + sudo dmesg -Tw 00:09:37.290 + [[ -e 
/var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:09:37.290 + [[ -x /usr/src/fio-static/fio ]] 00:09:37.290 + export FIO_BIN=/usr/src/fio-static/fio 00:09:37.290 + FIO_BIN=/usr/src/fio-static/fio 00:09:37.290 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:09:37.290 + [[ ! -v VFIO_QEMU_BIN ]] 00:09:37.290 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:09:37.290 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:37.290 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:37.290 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:09:37.290 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:37.290 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:37.290 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:09:37.290 Test configuration: 00:09:37.290 SPDK_RUN_FUNCTIONAL_TEST=1 00:09:37.290 SPDK_TEST_NVME=1 00:09:37.290 SPDK_TEST_FTL=1 00:09:37.290 SPDK_TEST_ISAL=1 00:09:37.290 SPDK_RUN_ASAN=1 00:09:37.290 SPDK_RUN_UBSAN=1 00:09:37.290 SPDK_TEST_XNVME=1 00:09:37.290 SPDK_TEST_NVME_FDP=1 00:09:37.290 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:09:37.290 RUN_NIGHTLY=1 12:11:46 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:37.548 12:11:46 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:09:37.548 12:11:46 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:37.548 12:11:46 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:37.548 12:11:46 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.548 12:11:46 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.548 12:11:46 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.548 12:11:46 -- paths/export.sh@5 -- $ export PATH 00:09:37.548 12:11:46 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.548 12:11:46 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:09:37.548 12:11:46 -- common/autobuild_common.sh@444 -- $ date +%s 00:09:37.548 12:11:46 -- common/autobuild_common.sh@444 -- $ mktemp -dt 
spdk_1720613506.XXXXXX 00:09:37.548 12:11:46 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720613506.FoP3W6 00:09:37.548 12:11:46 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:09:37.548 12:11:46 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:09:37.548 12:11:46 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:09:37.548 12:11:46 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:09:37.548 12:11:46 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:09:37.548 12:11:46 -- common/autobuild_common.sh@460 -- $ get_config_params 00:09:37.548 12:11:46 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:09:37.548 12:11:46 -- common/autotest_common.sh@10 -- $ set +x 00:09:37.548 12:11:46 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:09:37.548 12:11:46 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:09:37.548 12:11:46 -- pm/common@17 -- $ local monitor 00:09:37.548 12:11:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:09:37.548 12:11:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:09:37.548 12:11:46 -- pm/common@25 -- $ sleep 1 00:09:37.548 12:11:46 -- pm/common@21 -- $ date +%s 00:09:37.548 12:11:46 -- pm/common@21 -- $ date +%s 00:09:37.548 12:11:46 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1720613506 00:09:37.548 12:11:46 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1720613506 00:09:37.548 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1720613506_collect-vmstat.pm.log 00:09:37.548 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1720613506_collect-cpu-load.pm.log 00:09:38.484 12:11:47 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:09:38.484 12:11:47 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:09:38.484 12:11:47 -- spdk/autobuild.sh@12 -- $ umask 022 00:09:38.484 12:11:47 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:09:38.484 12:11:47 -- spdk/autobuild.sh@16 -- $ date -u 00:09:38.485 Wed Jul 10 12:11:47 PM UTC 2024 00:09:38.485 12:11:47 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:09:38.485 v24.09-pre-193-g968224f46 00:09:38.485 12:11:47 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:09:38.485 12:11:47 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:09:38.485 12:11:47 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:09:38.485 12:11:47 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:09:38.485 12:11:47 -- common/autotest_common.sh@10 -- $ set +x 00:09:38.485 ************************************ 00:09:38.485 START TEST asan 00:09:38.485 ************************************ 00:09:38.485 using asan 00:09:38.485 12:11:47 asan -- common/autotest_common.sh@1123 -- $ echo 'using asan' 00:09:38.485 00:09:38.485 
real 0m0.000s 00:09:38.485 user 0m0.000s 00:09:38.485 sys 0m0.000s 00:09:38.485 12:11:47 asan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:09:38.485 12:11:47 asan -- common/autotest_common.sh@10 -- $ set +x 00:09:38.485 ************************************ 00:09:38.485 END TEST asan 00:09:38.485 ************************************ 00:09:38.485 12:11:47 -- common/autotest_common.sh@1142 -- $ return 0 00:09:38.485 12:11:47 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:09:38.485 12:11:47 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:09:38.485 12:11:47 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:09:38.485 12:11:47 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:09:38.485 12:11:47 -- common/autotest_common.sh@10 -- $ set +x 00:09:38.485 ************************************ 00:09:38.485 START TEST ubsan 00:09:38.485 ************************************ 00:09:38.485 using ubsan 00:09:38.485 12:11:47 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:09:38.485 00:09:38.485 real 0m0.000s 00:09:38.485 user 0m0.000s 00:09:38.485 sys 0m0.000s 00:09:38.485 12:11:47 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:09:38.485 ************************************ 00:09:38.485 12:11:47 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:09:38.485 END TEST ubsan 00:09:38.485 ************************************ 00:09:38.743 12:11:47 -- common/autotest_common.sh@1142 -- $ return 0 00:09:38.743 12:11:47 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:09:38.743 12:11:47 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:09:38.743 12:11:47 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:09:38.743 12:11:47 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:09:38.743 12:11:47 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:09:38.743 12:11:47 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:09:38.743 12:11:47 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:09:38.743 12:11:48 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:09:38.743 12:11:48 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:09:38.743 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:09:38.743 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:09:39.310 Using 'verbs' RDMA provider 00:09:55.596 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:10:13.685 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:10:13.685 Creating mk/config.mk...done. 00:10:13.685 Creating mk/cc.flags.mk...done. 00:10:13.685 Type 'make' to build. 
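The configure invocation above is driven entirely by the autorun-spdk.conf shown earlier in the log. A minimal sketch of the equivalent manual build for a local checkout (the path is an assumption; the flags are copied verbatim from the log):
cd ~/spdk_repo/spdk
./configure --enable-debug --enable-werror --with-rdma --with-idxd \
    --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
    --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
make -j"$(nproc)"   # the CI job itself runs 'run_test make make -j10' next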
00:10:13.685 12:12:20 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:10:13.685 12:12:20 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:10:13.685 12:12:20 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:10:13.685 12:12:20 -- common/autotest_common.sh@10 -- $ set +x 00:10:13.685 ************************************ 00:10:13.685 START TEST make 00:10:13.685 ************************************ 00:10:13.685 12:12:20 make -- common/autotest_common.sh@1123 -- $ make -j10 00:10:13.685 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:10:13.685 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:10:13.685 meson setup builddir \ 00:10:13.685 -Dwith-libaio=enabled \ 00:10:13.685 -Dwith-liburing=enabled \ 00:10:13.685 -Dwith-libvfn=disabled \ 00:10:13.685 -Dwith-spdk=false && \ 00:10:13.685 meson compile -C builddir && \ 00:10:13.685 cd -) 00:10:13.685 make[1]: Nothing to be done for 'all'. 00:10:14.621 The Meson build system 00:10:14.621 Version: 1.3.1 00:10:14.621 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:10:14.621 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:10:14.621 Build type: native build 00:10:14.621 Project name: xnvme 00:10:14.621 Project version: 0.7.3 00:10:14.621 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:10:14.621 C linker for the host machine: cc ld.bfd 2.39-16 00:10:14.621 Host machine cpu family: x86_64 00:10:14.621 Host machine cpu: x86_64 00:10:14.621 Message: host_machine.system: linux 00:10:14.621 Compiler for C supports arguments -Wno-missing-braces: YES 00:10:14.621 Compiler for C supports arguments -Wno-cast-function-type: YES 00:10:14.621 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:10:14.621 Run-time dependency threads found: YES 00:10:14.621 Has header "setupapi.h" : NO 00:10:14.621 Has header "linux/blkzoned.h" : YES 00:10:14.621 Has header "linux/blkzoned.h" : YES (cached) 00:10:14.621 Has header "libaio.h" : YES 00:10:14.621 Library aio found: YES 00:10:14.621 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:10:14.621 Run-time dependency liburing found: YES 2.2 00:10:14.621 Dependency libvfn skipped: feature with-libvfn disabled 00:10:14.621 Run-time dependency appleframeworks found: NO (tried framework) 00:10:14.621 Run-time dependency appleframeworks found: NO (tried framework) 00:10:14.621 Configuring xnvme_config.h using configuration 00:10:14.621 Configuring xnvme.spec using configuration 00:10:14.621 Run-time dependency bash-completion found: YES 2.11 00:10:14.621 Message: Bash-completions: /usr/share/bash-completion/completions 00:10:14.621 Program cp found: YES (/usr/bin/cp) 00:10:14.621 Has header "winsock2.h" : NO 00:10:14.621 Has header "dbghelp.h" : NO 00:10:14.621 Library rpcrt4 found: NO 00:10:14.621 Library rt found: YES 00:10:14.621 Checking for function "clock_gettime" with dependency -lrt: YES 00:10:14.621 Found CMake: /usr/bin/cmake (3.27.7) 00:10:14.621 Run-time dependency _spdk found: NO (tried pkgconfig and cmake) 00:10:14.621 Run-time dependency wpdk found: NO (tried pkgconfig and cmake) 00:10:14.621 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake) 00:10:14.621 Build targets in project: 32 00:10:14.621 00:10:14.621 xnvme 0.7.3 00:10:14.621 00:10:14.621 User defined options 00:10:14.621 with-libaio : enabled 00:10:14.621 with-liburing: enabled 00:10:14.621 with-libvfn : disabled 00:10:14.621 with-spdk : false 00:10:14.621 00:10:14.621 Found 
ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:10:14.879 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:10:14.879 [1/203] Generating toolbox/xnvme-driver-script with a custom command 00:10:15.137 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o 00:10:15.137 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o 00:10:15.137 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o 00:10:15.137 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o 00:10:15.137 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o 00:10:15.137 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o 00:10:15.137 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o 00:10:15.137 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o 00:10:15.137 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o 00:10:15.137 [11/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o 00:10:15.137 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o 00:10:15.137 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:10:15.137 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:10:15.137 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:10:15.137 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:10:15.137 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:10:15.137 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:10:15.396 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:10:15.396 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:10:15.396 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:10:15.396 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:10:15.396 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:10:15.396 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:10:15.396 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:10:15.396 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:10:15.396 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:10:15.396 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:10:15.396 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:10:15.396 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:10:15.396 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:10:15.396 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:10:15.396 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:10:15.396 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:10:15.396 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:10:15.396 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:10:15.396 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:10:15.396 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:10:15.396 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:10:15.396 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:10:15.396 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:10:15.396 
[42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:10:15.396 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:10:15.396 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:10:15.396 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:10:15.396 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:10:15.397 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:10:15.397 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:10:15.397 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:10:15.397 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:10:15.397 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:10:15.655 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:10:15.655 [53/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:10:15.655 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o 00:10:15.655 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:10:15.655 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:10:15.655 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:10:15.655 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:10:15.655 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:10:15.655 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:10:15.655 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:10:15.655 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:10:15.655 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:10:15.655 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:10:15.655 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:10:15.655 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:10:15.655 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:10:15.655 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:10:15.655 [69/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:10:15.913 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:10:15.913 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:10:15.913 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:10:15.913 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:10:15.913 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:10:15.913 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:10:15.913 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:10:15.913 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:10:15.913 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:10:15.913 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:10:15.913 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:10:15.913 [81/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:10:15.913 [82/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:10:15.913 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:10:15.913 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:10:15.913 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o 00:10:16.172 [86/203] Compiling C object 
lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:10:16.172 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:10:16.172 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:10:16.172 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:10:16.172 [90/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:10:16.172 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:10:16.172 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:10:16.172 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o 00:10:16.172 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:10:16.172 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:10:16.172 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:10:16.172 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:10:16.172 [98/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:10:16.172 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:10:16.172 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:10:16.172 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:10:16.172 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:10:16.172 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:10:16.172 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:10:16.172 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:10:16.172 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:10:16.172 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:10:16.172 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:10:16.172 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:10:16.172 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:10:16.431 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:10:16.431 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:10:16.431 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:10:16.431 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:10:16.431 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:10:16.431 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:10:16.431 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:10:16.431 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:10:16.431 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:10:16.431 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:10:16.431 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:10:16.431 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:10:16.431 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:10:16.431 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o 00:10:16.431 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:10:16.431 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:10:16.431 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:10:16.431 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 00:10:16.431 [129/203] Compiling C object 
lib/libxnvme.so.p/xnvme_spec.c.o 00:10:16.431 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o 00:10:16.431 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:10:16.431 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o 00:10:16.431 [133/203] Linking target lib/libxnvme.so 00:10:16.431 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:10:16.431 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:10:16.690 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:10:16.690 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:10:16.690 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:10:16.690 [139/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:10:16.690 [140/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:10:16.690 [141/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o 00:10:16.690 [142/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:10:16.690 [143/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o 00:10:16.690 [144/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o 00:10:16.690 [145/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:10:16.690 [146/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:10:16.690 [147/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:10:16.690 [148/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:10:16.690 [149/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:10:16.949 [150/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:10:16.949 [151/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:10:16.949 [152/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:10:16.949 [153/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:10:16.949 [154/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:10:16.949 [155/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:10:16.949 [156/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:10:16.949 [157/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:10:16.949 [158/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:10:16.949 [159/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:10:16.949 [160/203] Compiling C object tools/kvs.p/kvs.c.o 00:10:16.949 [161/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:10:16.949 [162/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:10:16.949 [163/203] Compiling C object tools/xdd.p/xdd.c.o 00:10:16.949 [164/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:10:17.207 [165/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:10:17.207 [166/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:10:17.207 [167/203] Compiling C object tools/lblk.p/lblk.c.o 00:10:17.207 [168/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:10:17.208 [169/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:10:17.208 [170/203] Compiling C object tools/zoned.p/zoned.c.o 00:10:17.208 [171/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:10:17.208 [172/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:10:17.466 [173/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:10:17.466 [174/203] Linking static target lib/libxnvme.a 00:10:17.466 [175/203] Linking target tests/xnvme_tests_cli 
00:10:17.466 [176/203] Linking target tests/xnvme_tests_lblk 00:10:17.466 [177/203] Linking target tests/xnvme_tests_xnvme_file 00:10:17.466 [178/203] Linking target tests/xnvme_tests_buf 00:10:17.467 [179/203] Linking target tests/xnvme_tests_async_intf 00:10:17.467 [180/203] Linking target tests/xnvme_tests_enum 00:10:17.467 [181/203] Linking target tests/xnvme_tests_ioworker 00:10:17.467 [182/203] Linking target tests/xnvme_tests_znd_explicit_open 00:10:17.467 [183/203] Linking target tests/xnvme_tests_scc 00:10:17.467 [184/203] Linking target tests/xnvme_tests_znd_append 00:10:17.467 [185/203] Linking target tests/xnvme_tests_xnvme_cli 00:10:17.467 [186/203] Linking target tests/xnvme_tests_znd_state 00:10:17.467 [187/203] Linking target tools/xdd 00:10:17.467 [188/203] Linking target tools/lblk 00:10:17.467 [189/203] Linking target tests/xnvme_tests_znd_zrwa 00:10:17.467 [190/203] Linking target tests/xnvme_tests_kvs 00:10:17.467 [191/203] Linking target tools/kvs 00:10:17.467 [192/203] Linking target examples/xnvme_dev 00:10:17.467 [193/203] Linking target tests/xnvme_tests_map 00:10:17.467 [194/203] Linking target examples/xnvme_enum 00:10:17.467 [195/203] Linking target tools/xnvme 00:10:17.467 [196/203] Linking target tools/zoned 00:10:17.467 [197/203] Linking target tools/xnvme_file 00:10:17.467 [198/203] Linking target examples/xnvme_io_async 00:10:17.467 [199/203] Linking target examples/xnvme_hello 00:10:17.467 [200/203] Linking target examples/xnvme_single_async 00:10:17.467 [201/203] Linking target examples/xnvme_single_sync 00:10:17.467 [202/203] Linking target examples/zoned_io_async 00:10:17.467 [203/203] Linking target examples/zoned_io_sync 00:10:17.467 INFO: autodetecting backend as ninja 00:10:17.467 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:10:17.725 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:10:24.339 The Meson build system 00:10:24.339 Version: 1.3.1 00:10:24.339 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:10:24.339 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:10:24.339 Build type: native build 00:10:24.339 Program cat found: YES (/usr/bin/cat) 00:10:24.339 Project name: DPDK 00:10:24.339 Project version: 24.03.0 00:10:24.339 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:10:24.339 C linker for the host machine: cc ld.bfd 2.39-16 00:10:24.339 Host machine cpu family: x86_64 00:10:24.339 Host machine cpu: x86_64 00:10:24.339 Message: ## Building in Developer Mode ## 00:10:24.339 Program pkg-config found: YES (/usr/bin/pkg-config) 00:10:24.339 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:10:24.339 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:10:24.339 Program python3 found: YES (/usr/bin/python3) 00:10:24.339 Program cat found: YES (/usr/bin/cat) 00:10:24.339 Compiler for C supports arguments -march=native: YES 00:10:24.339 Checking for size of "void *" : 8 00:10:24.339 Checking for size of "void *" : 8 (cached) 00:10:24.339 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:10:24.339 Library m found: YES 00:10:24.339 Library numa found: YES 00:10:24.339 Has header "numaif.h" : YES 00:10:24.339 Library fdt found: NO 00:10:24.339 Library execinfo found: NO 00:10:24.339 Has header "execinfo.h" : YES 00:10:24.339 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:10:24.339 
Run-time dependency libarchive found: NO (tried pkgconfig) 00:10:24.339 Run-time dependency libbsd found: NO (tried pkgconfig) 00:10:24.339 Run-time dependency jansson found: NO (tried pkgconfig) 00:10:24.339 Run-time dependency openssl found: YES 3.0.9 00:10:24.339 Run-time dependency libpcap found: YES 1.10.4 00:10:24.339 Has header "pcap.h" with dependency libpcap: YES 00:10:24.339 Compiler for C supports arguments -Wcast-qual: YES 00:10:24.339 Compiler for C supports arguments -Wdeprecated: YES 00:10:24.339 Compiler for C supports arguments -Wformat: YES 00:10:24.339 Compiler for C supports arguments -Wformat-nonliteral: NO 00:10:24.339 Compiler for C supports arguments -Wformat-security: NO 00:10:24.339 Compiler for C supports arguments -Wmissing-declarations: YES 00:10:24.339 Compiler for C supports arguments -Wmissing-prototypes: YES 00:10:24.339 Compiler for C supports arguments -Wnested-externs: YES 00:10:24.339 Compiler for C supports arguments -Wold-style-definition: YES 00:10:24.339 Compiler for C supports arguments -Wpointer-arith: YES 00:10:24.339 Compiler for C supports arguments -Wsign-compare: YES 00:10:24.339 Compiler for C supports arguments -Wstrict-prototypes: YES 00:10:24.339 Compiler for C supports arguments -Wundef: YES 00:10:24.339 Compiler for C supports arguments -Wwrite-strings: YES 00:10:24.339 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:10:24.339 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:10:24.339 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:10:24.339 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:10:24.339 Program objdump found: YES (/usr/bin/objdump) 00:10:24.339 Compiler for C supports arguments -mavx512f: YES 00:10:24.339 Checking if "AVX512 checking" compiles: YES 00:10:24.339 Fetching value of define "__SSE4_2__" : 1 00:10:24.339 Fetching value of define "__AES__" : 1 00:10:24.339 Fetching value of define "__AVX__" : 1 00:10:24.339 Fetching value of define "__AVX2__" : 1 00:10:24.339 Fetching value of define "__AVX512BW__" : 1 00:10:24.339 Fetching value of define "__AVX512CD__" : 1 00:10:24.339 Fetching value of define "__AVX512DQ__" : 1 00:10:24.339 Fetching value of define "__AVX512F__" : 1 00:10:24.339 Fetching value of define "__AVX512VL__" : 1 00:10:24.339 Fetching value of define "__PCLMUL__" : 1 00:10:24.339 Fetching value of define "__RDRND__" : 1 00:10:24.339 Fetching value of define "__RDSEED__" : 1 00:10:24.339 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:10:24.339 Fetching value of define "__znver1__" : (undefined) 00:10:24.339 Fetching value of define "__znver2__" : (undefined) 00:10:24.339 Fetching value of define "__znver3__" : (undefined) 00:10:24.339 Fetching value of define "__znver4__" : (undefined) 00:10:24.339 Library asan found: YES 00:10:24.339 Compiler for C supports arguments -Wno-format-truncation: YES 00:10:24.339 Message: lib/log: Defining dependency "log" 00:10:24.339 Message: lib/kvargs: Defining dependency "kvargs" 00:10:24.339 Message: lib/telemetry: Defining dependency "telemetry" 00:10:24.339 Library rt found: YES 00:10:24.339 Checking for function "getentropy" : NO 00:10:24.339 Message: lib/eal: Defining dependency "eal" 00:10:24.339 Message: lib/ring: Defining dependency "ring" 00:10:24.339 Message: lib/rcu: Defining dependency "rcu" 00:10:24.339 Message: lib/mempool: Defining dependency "mempool" 00:10:24.339 Message: lib/mbuf: Defining dependency "mbuf" 00:10:24.339 Fetching value of define "__PCLMUL__" 
: 1 (cached) 00:10:24.339 Fetching value of define "__AVX512F__" : 1 (cached) 00:10:24.339 Fetching value of define "__AVX512BW__" : 1 (cached) 00:10:24.339 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:10:24.339 Fetching value of define "__AVX512VL__" : 1 (cached) 00:10:24.339 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:10:24.339 Compiler for C supports arguments -mpclmul: YES 00:10:24.339 Compiler for C supports arguments -maes: YES 00:10:24.339 Compiler for C supports arguments -mavx512f: YES (cached) 00:10:24.339 Compiler for C supports arguments -mavx512bw: YES 00:10:24.339 Compiler for C supports arguments -mavx512dq: YES 00:10:24.339 Compiler for C supports arguments -mavx512vl: YES 00:10:24.339 Compiler for C supports arguments -mvpclmulqdq: YES 00:10:24.339 Compiler for C supports arguments -mavx2: YES 00:10:24.339 Compiler for C supports arguments -mavx: YES 00:10:24.339 Message: lib/net: Defining dependency "net" 00:10:24.339 Message: lib/meter: Defining dependency "meter" 00:10:24.339 Message: lib/ethdev: Defining dependency "ethdev" 00:10:24.339 Message: lib/pci: Defining dependency "pci" 00:10:24.339 Message: lib/cmdline: Defining dependency "cmdline" 00:10:24.339 Message: lib/hash: Defining dependency "hash" 00:10:24.339 Message: lib/timer: Defining dependency "timer" 00:10:24.339 Message: lib/compressdev: Defining dependency "compressdev" 00:10:24.339 Message: lib/cryptodev: Defining dependency "cryptodev" 00:10:24.339 Message: lib/dmadev: Defining dependency "dmadev" 00:10:24.339 Compiler for C supports arguments -Wno-cast-qual: YES 00:10:24.339 Message: lib/power: Defining dependency "power" 00:10:24.339 Message: lib/reorder: Defining dependency "reorder" 00:10:24.339 Message: lib/security: Defining dependency "security" 00:10:24.339 Has header "linux/userfaultfd.h" : YES 00:10:24.339 Has header "linux/vduse.h" : YES 00:10:24.339 Message: lib/vhost: Defining dependency "vhost" 00:10:24.339 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:10:24.339 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:10:24.339 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:10:24.339 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:10:24.339 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:10:24.339 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:10:24.339 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:10:24.339 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:10:24.339 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:10:24.339 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:10:24.339 Program doxygen found: YES (/usr/bin/doxygen) 00:10:24.339 Configuring doxy-api-html.conf using configuration 00:10:24.339 Configuring doxy-api-man.conf using configuration 00:10:24.339 Program mandb found: YES (/usr/bin/mandb) 00:10:24.339 Program sphinx-build found: NO 00:10:24.339 Configuring rte_build_config.h using configuration 00:10:24.339 Message: 00:10:24.339 ================= 00:10:24.339 Applications Enabled 00:10:24.339 ================= 00:10:24.339 00:10:24.339 apps: 00:10:24.339 00:10:24.339 00:10:24.339 Message: 00:10:24.339 ================= 00:10:24.339 Libraries Enabled 00:10:24.339 ================= 00:10:24.339 00:10:24.339 libs: 00:10:24.339 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:10:24.339 net, meter, ethdev, 
pci, cmdline, hash, timer, compressdev, 00:10:24.339 cryptodev, dmadev, power, reorder, security, vhost, 00:10:24.339 00:10:24.339 Message: 00:10:24.339 =============== 00:10:24.339 Drivers Enabled 00:10:24.339 =============== 00:10:24.339 00:10:24.339 common: 00:10:24.339 00:10:24.339 bus: 00:10:24.339 pci, vdev, 00:10:24.339 mempool: 00:10:24.339 ring, 00:10:24.339 dma: 00:10:24.339 00:10:24.339 net: 00:10:24.339 00:10:24.339 crypto: 00:10:24.339 00:10:24.339 compress: 00:10:24.339 00:10:24.339 vdpa: 00:10:24.339 00:10:24.339 00:10:24.339 Message: 00:10:24.339 ================= 00:10:24.339 Content Skipped 00:10:24.339 ================= 00:10:24.339 00:10:24.339 apps: 00:10:24.339 dumpcap: explicitly disabled via build config 00:10:24.340 graph: explicitly disabled via build config 00:10:24.340 pdump: explicitly disabled via build config 00:10:24.340 proc-info: explicitly disabled via build config 00:10:24.340 test-acl: explicitly disabled via build config 00:10:24.340 test-bbdev: explicitly disabled via build config 00:10:24.340 test-cmdline: explicitly disabled via build config 00:10:24.340 test-compress-perf: explicitly disabled via build config 00:10:24.340 test-crypto-perf: explicitly disabled via build config 00:10:24.340 test-dma-perf: explicitly disabled via build config 00:10:24.340 test-eventdev: explicitly disabled via build config 00:10:24.340 test-fib: explicitly disabled via build config 00:10:24.340 test-flow-perf: explicitly disabled via build config 00:10:24.340 test-gpudev: explicitly disabled via build config 00:10:24.340 test-mldev: explicitly disabled via build config 00:10:24.340 test-pipeline: explicitly disabled via build config 00:10:24.340 test-pmd: explicitly disabled via build config 00:10:24.340 test-regex: explicitly disabled via build config 00:10:24.340 test-sad: explicitly disabled via build config 00:10:24.340 test-security-perf: explicitly disabled via build config 00:10:24.340 00:10:24.340 libs: 00:10:24.340 argparse: explicitly disabled via build config 00:10:24.340 metrics: explicitly disabled via build config 00:10:24.340 acl: explicitly disabled via build config 00:10:24.340 bbdev: explicitly disabled via build config 00:10:24.340 bitratestats: explicitly disabled via build config 00:10:24.340 bpf: explicitly disabled via build config 00:10:24.340 cfgfile: explicitly disabled via build config 00:10:24.340 distributor: explicitly disabled via build config 00:10:24.340 efd: explicitly disabled via build config 00:10:24.340 eventdev: explicitly disabled via build config 00:10:24.340 dispatcher: explicitly disabled via build config 00:10:24.340 gpudev: explicitly disabled via build config 00:10:24.340 gro: explicitly disabled via build config 00:10:24.340 gso: explicitly disabled via build config 00:10:24.340 ip_frag: explicitly disabled via build config 00:10:24.340 jobstats: explicitly disabled via build config 00:10:24.340 latencystats: explicitly disabled via build config 00:10:24.340 lpm: explicitly disabled via build config 00:10:24.340 member: explicitly disabled via build config 00:10:24.340 pcapng: explicitly disabled via build config 00:10:24.340 rawdev: explicitly disabled via build config 00:10:24.340 regexdev: explicitly disabled via build config 00:10:24.340 mldev: explicitly disabled via build config 00:10:24.340 rib: explicitly disabled via build config 00:10:24.340 sched: explicitly disabled via build config 00:10:24.340 stack: explicitly disabled via build config 00:10:24.340 ipsec: explicitly disabled via build config 00:10:24.340 
pdcp: explicitly disabled via build config 00:10:24.340 fib: explicitly disabled via build config 00:10:24.340 port: explicitly disabled via build config 00:10:24.340 pdump: explicitly disabled via build config 00:10:24.340 table: explicitly disabled via build config 00:10:24.340 pipeline: explicitly disabled via build config 00:10:24.340 graph: explicitly disabled via build config 00:10:24.340 node: explicitly disabled via build config 00:10:24.340 00:10:24.340 drivers: 00:10:24.340 common/cpt: not in enabled drivers build config 00:10:24.340 common/dpaax: not in enabled drivers build config 00:10:24.340 common/iavf: not in enabled drivers build config 00:10:24.340 common/idpf: not in enabled drivers build config 00:10:24.340 common/ionic: not in enabled drivers build config 00:10:24.340 common/mvep: not in enabled drivers build config 00:10:24.340 common/octeontx: not in enabled drivers build config 00:10:24.340 bus/auxiliary: not in enabled drivers build config 00:10:24.340 bus/cdx: not in enabled drivers build config 00:10:24.340 bus/dpaa: not in enabled drivers build config 00:10:24.340 bus/fslmc: not in enabled drivers build config 00:10:24.340 bus/ifpga: not in enabled drivers build config 00:10:24.340 bus/platform: not in enabled drivers build config 00:10:24.340 bus/uacce: not in enabled drivers build config 00:10:24.340 bus/vmbus: not in enabled drivers build config 00:10:24.340 common/cnxk: not in enabled drivers build config 00:10:24.340 common/mlx5: not in enabled drivers build config 00:10:24.340 common/nfp: not in enabled drivers build config 00:10:24.340 common/nitrox: not in enabled drivers build config 00:10:24.340 common/qat: not in enabled drivers build config 00:10:24.340 common/sfc_efx: not in enabled drivers build config 00:10:24.340 mempool/bucket: not in enabled drivers build config 00:10:24.340 mempool/cnxk: not in enabled drivers build config 00:10:24.340 mempool/dpaa: not in enabled drivers build config 00:10:24.340 mempool/dpaa2: not in enabled drivers build config 00:10:24.340 mempool/octeontx: not in enabled drivers build config 00:10:24.340 mempool/stack: not in enabled drivers build config 00:10:24.340 dma/cnxk: not in enabled drivers build config 00:10:24.340 dma/dpaa: not in enabled drivers build config 00:10:24.340 dma/dpaa2: not in enabled drivers build config 00:10:24.340 dma/hisilicon: not in enabled drivers build config 00:10:24.340 dma/idxd: not in enabled drivers build config 00:10:24.340 dma/ioat: not in enabled drivers build config 00:10:24.340 dma/skeleton: not in enabled drivers build config 00:10:24.340 net/af_packet: not in enabled drivers build config 00:10:24.340 net/af_xdp: not in enabled drivers build config 00:10:24.340 net/ark: not in enabled drivers build config 00:10:24.340 net/atlantic: not in enabled drivers build config 00:10:24.340 net/avp: not in enabled drivers build config 00:10:24.340 net/axgbe: not in enabled drivers build config 00:10:24.340 net/bnx2x: not in enabled drivers build config 00:10:24.340 net/bnxt: not in enabled drivers build config 00:10:24.340 net/bonding: not in enabled drivers build config 00:10:24.340 net/cnxk: not in enabled drivers build config 00:10:24.340 net/cpfl: not in enabled drivers build config 00:10:24.340 net/cxgbe: not in enabled drivers build config 00:10:24.340 net/dpaa: not in enabled drivers build config 00:10:24.340 net/dpaa2: not in enabled drivers build config 00:10:24.340 net/e1000: not in enabled drivers build config 00:10:24.340 net/ena: not in enabled drivers build config 
00:10:24.340 net/enetc: not in enabled drivers build config 00:10:24.340 net/enetfec: not in enabled drivers build config 00:10:24.340 net/enic: not in enabled drivers build config 00:10:24.340 net/failsafe: not in enabled drivers build config 00:10:24.340 net/fm10k: not in enabled drivers build config 00:10:24.340 net/gve: not in enabled drivers build config 00:10:24.340 net/hinic: not in enabled drivers build config 00:10:24.340 net/hns3: not in enabled drivers build config 00:10:24.340 net/i40e: not in enabled drivers build config 00:10:24.340 net/iavf: not in enabled drivers build config 00:10:24.340 net/ice: not in enabled drivers build config 00:10:24.340 net/idpf: not in enabled drivers build config 00:10:24.340 net/igc: not in enabled drivers build config 00:10:24.340 net/ionic: not in enabled drivers build config 00:10:24.340 net/ipn3ke: not in enabled drivers build config 00:10:24.340 net/ixgbe: not in enabled drivers build config 00:10:24.340 net/mana: not in enabled drivers build config 00:10:24.340 net/memif: not in enabled drivers build config 00:10:24.340 net/mlx4: not in enabled drivers build config 00:10:24.340 net/mlx5: not in enabled drivers build config 00:10:24.340 net/mvneta: not in enabled drivers build config 00:10:24.340 net/mvpp2: not in enabled drivers build config 00:10:24.340 net/netvsc: not in enabled drivers build config 00:10:24.340 net/nfb: not in enabled drivers build config 00:10:24.340 net/nfp: not in enabled drivers build config 00:10:24.340 net/ngbe: not in enabled drivers build config 00:10:24.340 net/null: not in enabled drivers build config 00:10:24.340 net/octeontx: not in enabled drivers build config 00:10:24.340 net/octeon_ep: not in enabled drivers build config 00:10:24.340 net/pcap: not in enabled drivers build config 00:10:24.340 net/pfe: not in enabled drivers build config 00:10:24.340 net/qede: not in enabled drivers build config 00:10:24.340 net/ring: not in enabled drivers build config 00:10:24.340 net/sfc: not in enabled drivers build config 00:10:24.340 net/softnic: not in enabled drivers build config 00:10:24.340 net/tap: not in enabled drivers build config 00:10:24.340 net/thunderx: not in enabled drivers build config 00:10:24.340 net/txgbe: not in enabled drivers build config 00:10:24.340 net/vdev_netvsc: not in enabled drivers build config 00:10:24.340 net/vhost: not in enabled drivers build config 00:10:24.340 net/virtio: not in enabled drivers build config 00:10:24.340 net/vmxnet3: not in enabled drivers build config 00:10:24.340 raw/*: missing internal dependency, "rawdev" 00:10:24.340 crypto/armv8: not in enabled drivers build config 00:10:24.340 crypto/bcmfs: not in enabled drivers build config 00:10:24.340 crypto/caam_jr: not in enabled drivers build config 00:10:24.340 crypto/ccp: not in enabled drivers build config 00:10:24.340 crypto/cnxk: not in enabled drivers build config 00:10:24.340 crypto/dpaa_sec: not in enabled drivers build config 00:10:24.340 crypto/dpaa2_sec: not in enabled drivers build config 00:10:24.340 crypto/ipsec_mb: not in enabled drivers build config 00:10:24.340 crypto/mlx5: not in enabled drivers build config 00:10:24.340 crypto/mvsam: not in enabled drivers build config 00:10:24.340 crypto/nitrox: not in enabled drivers build config 00:10:24.340 crypto/null: not in enabled drivers build config 00:10:24.340 crypto/octeontx: not in enabled drivers build config 00:10:24.340 crypto/openssl: not in enabled drivers build config 00:10:24.340 crypto/scheduler: not in enabled drivers build config 00:10:24.340 
crypto/uadk: not in enabled drivers build config 00:10:24.340 crypto/virtio: not in enabled drivers build config 00:10:24.340 compress/isal: not in enabled drivers build config 00:10:24.340 compress/mlx5: not in enabled drivers build config 00:10:24.340 compress/nitrox: not in enabled drivers build config 00:10:24.340 compress/octeontx: not in enabled drivers build config 00:10:24.340 compress/zlib: not in enabled drivers build config 00:10:24.340 regex/*: missing internal dependency, "regexdev" 00:10:24.340 ml/*: missing internal dependency, "mldev" 00:10:24.340 vdpa/ifc: not in enabled drivers build config 00:10:24.340 vdpa/mlx5: not in enabled drivers build config 00:10:24.340 vdpa/nfp: not in enabled drivers build config 00:10:24.340 vdpa/sfc: not in enabled drivers build config 00:10:24.340 event/*: missing internal dependency, "eventdev" 00:10:24.340 baseband/*: missing internal dependency, "bbdev" 00:10:24.340 gpu/*: missing internal dependency, "gpudev" 00:10:24.341 00:10:24.341 00:10:24.341 Build targets in project: 85 00:10:24.341 00:10:24.341 DPDK 24.03.0 00:10:24.341 00:10:24.341 User defined options 00:10:24.341 buildtype : debug 00:10:24.341 default_library : shared 00:10:24.341 libdir : lib 00:10:24.341 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:10:24.341 b_sanitize : address 00:10:24.341 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:10:24.341 c_link_args : 00:10:24.341 cpu_instruction_set: native 00:10:24.341 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:10:24.341 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:10:24.341 enable_docs : false 00:10:24.341 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:10:24.341 enable_kmods : false 00:10:24.341 max_lcores : 128 00:10:24.341 tests : false 00:10:24.341 00:10:24.341 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:10:24.341 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:10:24.341 [1/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:10:24.341 [2/268] Linking static target lib/librte_kvargs.a 00:10:24.341 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:10:24.341 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:10:24.341 [5/268] Linking static target lib/librte_log.a 00:10:24.341 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:10:24.600 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:10:24.860 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:10:24.860 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:10:24.860 [10/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:10:24.860 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:10:24.860 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:10:24.860 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:10:24.860 [14/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:10:24.860 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:10:24.860 [16/268] Linking static target lib/librte_telemetry.a 00:10:24.860 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:10:25.119 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:10:25.378 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:10:25.378 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:10:25.378 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:10:25.378 [22/268] Linking target lib/librte_log.so.24.1 00:10:25.378 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:10:25.378 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:10:25.378 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:10:25.378 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:10:25.638 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:10:25.638 [28/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:10:25.638 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:10:25.638 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:10:25.638 [31/268] Linking target lib/librte_kvargs.so.24.1 00:10:25.638 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:10:25.897 [33/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:10:25.897 [34/268] Linking target lib/librte_telemetry.so.24.1 00:10:25.897 [35/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:10:25.897 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:10:26.156 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:10:26.156 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:10:26.156 [39/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:10:26.156 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:10:26.156 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:10:26.156 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:10:26.156 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:10:26.156 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:10:26.156 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:10:26.156 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:10:26.415 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:10:26.415 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:10:26.674 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:10:26.674 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:10:26.674 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:10:26.674 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:10:26.933 [53/268] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_service.c.o 00:10:26.933 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:10:26.933 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:10:26.933 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:10:26.933 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:10:26.933 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:10:26.933 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:10:27.192 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:10:27.192 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:10:27.192 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:10:27.451 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:10:27.451 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:10:27.451 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:10:27.451 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:10:27.451 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:10:27.451 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:10:27.711 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:10:27.711 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:10:27.970 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:10:27.970 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:10:27.970 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:10:27.970 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:10:27.970 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:10:27.970 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:10:27.970 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:10:28.228 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:10:28.228 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:10:28.228 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:10:28.486 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:10:28.486 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:10:28.486 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:10:28.486 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:10:28.744 [85/268] Linking static target lib/librte_eal.a 00:10:28.744 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:10:28.744 [87/268] Linking static target lib/librte_ring.a 00:10:28.744 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:10:29.002 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:10:29.002 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:10:29.002 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:10:29.002 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:10:29.002 [93/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:10:29.002 [94/268] Compiling C object 
lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:10:29.002 [95/268] Linking static target lib/librte_mempool.a 00:10:29.002 [96/268] Linking static target lib/librte_rcu.a 00:10:29.261 [97/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:10:29.261 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:10:29.519 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:10:29.519 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:10:29.519 [101/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:10:29.519 [102/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:10:29.778 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:10:29.778 [104/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:10:29.778 [105/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:10:29.778 [106/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:10:29.778 [107/268] Linking static target lib/librte_net.a 00:10:30.036 [108/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:10:30.036 [109/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:10:30.036 [110/268] Linking static target lib/librte_mbuf.a 00:10:30.036 [111/268] Linking static target lib/librte_meter.a 00:10:30.036 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:10:30.036 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:10:30.036 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:10:30.294 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:10:30.294 [116/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:10:30.553 [117/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:10:30.553 [118/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:10:30.553 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:10:30.553 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:10:30.811 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:10:31.069 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:10:31.069 [123/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:10:31.069 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:10:31.326 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:10:31.326 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:10:31.326 [127/268] Linking static target lib/librte_pci.a 00:10:31.326 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:10:31.326 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:10:31.326 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:10:31.326 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:10:31.582 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:10:31.582 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:10:31.582 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:10:31.582 [135/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:10:31.582 [136/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:10:31.582 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:10:31.840 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:10:31.840 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:10:31.840 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:10:31.840 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:10:31.840 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:10:31.840 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:10:31.840 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:10:31.840 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:10:32.097 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:10:32.097 [147/268] Linking static target lib/librte_cmdline.a 00:10:32.097 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:10:32.097 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:10:32.097 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:10:32.354 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:10:32.612 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:10:32.612 [153/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:10:32.612 [154/268] Linking static target lib/librte_timer.a 00:10:32.612 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:10:32.869 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:10:32.869 [157/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:10:32.869 [158/268] Linking static target lib/librte_compressdev.a 00:10:32.869 [159/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:10:32.869 [160/268] Linking static target lib/librte_ethdev.a 00:10:32.869 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:10:32.869 [162/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:10:33.127 [163/268] Linking static target lib/librte_hash.a 00:10:33.127 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:10:33.385 [165/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:10:33.385 [166/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:10:33.385 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:10:33.385 [168/268] Linking static target lib/librte_dmadev.a 00:10:33.385 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:10:33.385 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:10:33.643 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:10:33.644 [172/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:10:33.644 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:10:33.644 [174/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 
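(Editor's note: the ninja steps above compile DPDK's core libraries - librte_eal, librte_ring, librte_mempool, librte_mbuf, librte_net, librte_hash, and friends - as shared, ASan-instrumented debug objects for SPDK to link against. Purely for orientation, here is a minimal, hedged C sketch of what a consumer of those libraries looks like; it is not part of this build, and the pool name and sizes are placeholder values, not values taken from this run.

#include <stdio.h>
#include <rte_eal.h>
#include <rte_errno.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

int main(int argc, char **argv)
{
    /* Initialize the Environment Abstraction Layer (hugepages, lcores, bus scan). */
    int ret = rte_eal_init(argc, argv);
    if (ret < 0) {
        fprintf(stderr, "rte_eal_init failed: %s\n", rte_strerror(rte_errno));
        return 1;
    }

    /* Create a packet-mbuf pool; 8191 mbufs and the default buffer size are
     * arbitrary example numbers. */
    struct rte_mempool *pool = rte_pktmbuf_pool_create("example_pool", 8191, 256, 0,
                                                       RTE_MBUF_DEFAULT_BUF_SIZE,
                                                       rte_socket_id());
    if (pool == NULL) {
        fprintf(stderr, "mbuf pool creation failed\n");
        rte_eal_cleanup();
        return 1;
    }

    /* Allocate and immediately release one mbuf, just to exercise the pool. */
    struct rte_mbuf *m = rte_pktmbuf_alloc(pool);
    if (m != NULL)
        rte_pktmbuf_free(m);

    rte_mempool_free(pool);
    rte_eal_cleanup();
    return 0;
}

End of note; the build log continues below.)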
00:10:33.902 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:10:33.902 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:10:34.161 [177/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:10:34.161 [178/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:10:34.161 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:10:34.161 [180/268] Linking static target lib/librte_cryptodev.a 00:10:34.161 [181/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:10:34.161 [182/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:10:34.161 [183/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:10:34.161 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:10:34.419 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:10:34.419 [186/268] Linking static target lib/librte_power.a 00:10:34.678 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:10:34.678 [188/268] Linking static target lib/librte_reorder.a 00:10:34.678 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:10:34.678 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:10:34.678 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:10:34.936 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:10:34.936 [193/268] Linking static target lib/librte_security.a 00:10:34.936 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:10:35.194 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:10:35.452 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:10:35.452 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:10:35.452 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:10:35.452 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:10:35.452 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:10:35.452 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:10:35.710 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:10:35.969 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:10:35.969 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:10:35.969 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:10:35.969 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:10:35.969 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:10:35.969 [208/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:10:35.969 [209/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:10:36.227 [210/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:10:36.227 [211/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:10:36.227 [212/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:10:36.227 [213/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:10:36.227 [214/268] 
Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:10:36.227 [215/268] Linking static target drivers/librte_bus_pci.a 00:10:36.484 [216/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:10:36.484 [217/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:10:36.484 [218/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:10:36.484 [219/268] Linking static target drivers/librte_bus_vdev.a 00:10:36.484 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:10:36.484 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:10:36.743 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:10:36.743 [223/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:10:36.743 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:10:36.743 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:10:36.743 [226/268] Linking static target drivers/librte_mempool_ring.a 00:10:36.743 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:10:37.719 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:10:41.903 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:10:41.903 [230/268] Linking static target lib/librte_vhost.a 00:10:41.903 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:10:41.903 [232/268] Linking target lib/librte_eal.so.24.1 00:10:41.903 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:10:41.903 [234/268] Linking target lib/librte_pci.so.24.1 00:10:41.903 [235/268] Linking target lib/librte_ring.so.24.1 00:10:41.903 [236/268] Linking target lib/librte_timer.so.24.1 00:10:41.903 [237/268] Linking target lib/librte_meter.so.24.1 00:10:41.903 [238/268] Linking target lib/librte_dmadev.so.24.1 00:10:41.903 [239/268] Linking target drivers/librte_bus_vdev.so.24.1 00:10:41.903 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:10:41.903 [241/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:10:41.903 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:10:41.903 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:10:41.903 [244/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:10:41.903 [245/268] Linking target lib/librte_rcu.so.24.1 00:10:41.903 [246/268] Linking target drivers/librte_bus_pci.so.24.1 00:10:41.903 [247/268] Linking target lib/librte_mempool.so.24.1 00:10:42.162 [248/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:10:42.162 [249/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:10:42.162 [250/268] Linking target drivers/librte_mempool_ring.so.24.1 00:10:42.162 [251/268] Linking target lib/librte_mbuf.so.24.1 00:10:42.162 [252/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:10:42.420 [253/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:10:42.420 [254/268] Linking target 
lib/librte_cryptodev.so.24.1 00:10:42.420 [255/268] Linking target lib/librte_net.so.24.1 00:10:42.420 [256/268] Linking target lib/librte_reorder.so.24.1 00:10:42.420 [257/268] Linking target lib/librte_compressdev.so.24.1 00:10:42.420 [258/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:10:42.420 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:10:42.697 [260/268] Linking target lib/librte_cmdline.so.24.1 00:10:42.697 [261/268] Linking target lib/librte_hash.so.24.1 00:10:42.697 [262/268] Linking target lib/librte_security.so.24.1 00:10:42.697 [263/268] Linking target lib/librte_ethdev.so.24.1 00:10:42.697 [264/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:10:42.697 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:10:42.959 [266/268] Linking target lib/librte_power.so.24.1 00:10:43.524 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:10:43.524 [268/268] Linking target lib/librte_vhost.so.24.1 00:10:43.524 INFO: autodetecting backend as ninja 00:10:43.524 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:10:44.897 CC lib/ut/ut.o 00:10:44.898 CC lib/ut_mock/mock.o 00:10:44.898 CC lib/log/log.o 00:10:44.898 CC lib/log/log_flags.o 00:10:44.898 CC lib/log/log_deprecated.o 00:10:45.156 LIB libspdk_ut.a 00:10:45.156 LIB libspdk_ut_mock.a 00:10:45.156 LIB libspdk_log.a 00:10:45.156 SO libspdk_ut.so.2.0 00:10:45.156 SO libspdk_ut_mock.so.6.0 00:10:45.156 SO libspdk_log.so.7.0 00:10:45.156 SYMLINK libspdk_ut_mock.so 00:10:45.156 SYMLINK libspdk_ut.so 00:10:45.156 SYMLINK libspdk_log.so 00:10:45.415 CC lib/dma/dma.o 00:10:45.415 CC lib/util/base64.o 00:10:45.415 CC lib/util/bit_array.o 00:10:45.415 CC lib/util/crc32.o 00:10:45.415 CC lib/util/cpuset.o 00:10:45.415 CC lib/util/crc32c.o 00:10:45.415 CC lib/util/crc16.o 00:10:45.415 CC lib/ioat/ioat.o 00:10:45.673 CXX lib/trace_parser/trace.o 00:10:45.673 CC lib/util/crc32_ieee.o 00:10:45.673 CC lib/vfio_user/host/vfio_user_pci.o 00:10:45.673 CC lib/util/crc64.o 00:10:45.674 CC lib/util/dif.o 00:10:45.674 CC lib/util/fd.o 00:10:45.674 LIB libspdk_dma.a 00:10:45.674 CC lib/util/file.o 00:10:45.674 SO libspdk_dma.so.4.0 00:10:45.674 CC lib/vfio_user/host/vfio_user.o 00:10:45.674 CC lib/util/hexlify.o 00:10:45.674 CC lib/util/iov.o 00:10:45.932 SYMLINK libspdk_dma.so 00:10:45.932 CC lib/util/math.o 00:10:45.932 LIB libspdk_ioat.a 00:10:45.932 CC lib/util/pipe.o 00:10:45.932 CC lib/util/strerror_tls.o 00:10:45.932 SO libspdk_ioat.so.7.0 00:10:45.932 CC lib/util/string.o 00:10:45.932 SYMLINK libspdk_ioat.so 00:10:45.932 CC lib/util/uuid.o 00:10:45.932 CC lib/util/fd_group.o 00:10:45.932 CC lib/util/xor.o 00:10:45.932 LIB libspdk_vfio_user.a 00:10:45.932 CC lib/util/zipf.o 00:10:45.932 SO libspdk_vfio_user.so.5.0 00:10:46.191 SYMLINK libspdk_vfio_user.so 00:10:46.450 LIB libspdk_util.a 00:10:46.450 SO libspdk_util.so.9.1 00:10:46.709 LIB libspdk_trace_parser.a 00:10:46.709 SYMLINK libspdk_util.so 00:10:46.709 SO libspdk_trace_parser.so.5.0 00:10:46.709 SYMLINK libspdk_trace_parser.so 00:10:46.967 CC lib/idxd/idxd.o 00:10:46.967 CC lib/vmd/vmd.o 00:10:46.967 CC lib/conf/conf.o 00:10:46.967 CC lib/idxd/idxd_user.o 00:10:46.967 CC lib/idxd/idxd_kernel.o 00:10:46.967 CC lib/vmd/led.o 00:10:46.967 CC lib/json/json_parse.o 00:10:46.967 CC lib/rdma_provider/common.o 00:10:46.967 
CC lib/env_dpdk/env.o 00:10:46.967 CC lib/rdma_utils/rdma_utils.o 00:10:46.967 CC lib/env_dpdk/memory.o 00:10:46.967 CC lib/env_dpdk/pci.o 00:10:46.967 CC lib/rdma_provider/rdma_provider_verbs.o 00:10:47.225 CC lib/json/json_util.o 00:10:47.225 CC lib/json/json_write.o 00:10:47.225 LIB libspdk_conf.a 00:10:47.225 SO libspdk_conf.so.6.0 00:10:47.225 LIB libspdk_rdma_utils.a 00:10:47.225 SO libspdk_rdma_utils.so.1.0 00:10:47.225 LIB libspdk_rdma_provider.a 00:10:47.225 SYMLINK libspdk_conf.so 00:10:47.225 CC lib/env_dpdk/init.o 00:10:47.225 SO libspdk_rdma_provider.so.6.0 00:10:47.225 SYMLINK libspdk_rdma_utils.so 00:10:47.225 CC lib/env_dpdk/threads.o 00:10:47.225 CC lib/env_dpdk/pci_ioat.o 00:10:47.225 SYMLINK libspdk_rdma_provider.so 00:10:47.484 CC lib/env_dpdk/pci_virtio.o 00:10:47.484 CC lib/env_dpdk/pci_vmd.o 00:10:47.484 LIB libspdk_json.a 00:10:47.484 SO libspdk_json.so.6.0 00:10:47.484 CC lib/env_dpdk/pci_idxd.o 00:10:47.484 CC lib/env_dpdk/pci_event.o 00:10:47.484 CC lib/env_dpdk/sigbus_handler.o 00:10:47.484 LIB libspdk_idxd.a 00:10:47.484 SYMLINK libspdk_json.so 00:10:47.484 CC lib/env_dpdk/pci_dpdk.o 00:10:47.484 SO libspdk_idxd.so.12.0 00:10:47.484 LIB libspdk_vmd.a 00:10:47.484 CC lib/env_dpdk/pci_dpdk_2207.o 00:10:47.742 SYMLINK libspdk_idxd.so 00:10:47.742 CC lib/env_dpdk/pci_dpdk_2211.o 00:10:47.742 SO libspdk_vmd.so.6.0 00:10:47.742 SYMLINK libspdk_vmd.so 00:10:47.742 CC lib/jsonrpc/jsonrpc_server.o 00:10:47.742 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:10:47.742 CC lib/jsonrpc/jsonrpc_client.o 00:10:47.742 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:10:48.002 LIB libspdk_jsonrpc.a 00:10:48.261 SO libspdk_jsonrpc.so.6.0 00:10:48.261 SYMLINK libspdk_jsonrpc.so 00:10:48.520 LIB libspdk_env_dpdk.a 00:10:48.779 CC lib/rpc/rpc.o 00:10:48.779 SO libspdk_env_dpdk.so.14.1 00:10:48.779 LIB libspdk_rpc.a 00:10:49.037 SYMLINK libspdk_env_dpdk.so 00:10:49.037 SO libspdk_rpc.so.6.0 00:10:49.037 SYMLINK libspdk_rpc.so 00:10:49.297 CC lib/keyring/keyring.o 00:10:49.297 CC lib/keyring/keyring_rpc.o 00:10:49.297 CC lib/trace/trace.o 00:10:49.297 CC lib/trace/trace_flags.o 00:10:49.297 CC lib/trace/trace_rpc.o 00:10:49.297 CC lib/notify/notify.o 00:10:49.297 CC lib/notify/notify_rpc.o 00:10:49.556 LIB libspdk_notify.a 00:10:49.556 LIB libspdk_keyring.a 00:10:49.556 SO libspdk_notify.so.6.0 00:10:49.815 LIB libspdk_trace.a 00:10:49.815 SO libspdk_keyring.so.1.0 00:10:49.815 SYMLINK libspdk_notify.so 00:10:49.815 SO libspdk_trace.so.10.0 00:10:49.815 SYMLINK libspdk_keyring.so 00:10:49.815 SYMLINK libspdk_trace.so 00:10:50.382 CC lib/thread/thread.o 00:10:50.382 CC lib/thread/iobuf.o 00:10:50.382 CC lib/sock/sock.o 00:10:50.382 CC lib/sock/sock_rpc.o 00:10:50.641 LIB libspdk_sock.a 00:10:50.900 SO libspdk_sock.so.10.0 00:10:50.900 SYMLINK libspdk_sock.so 00:10:51.200 CC lib/nvme/nvme_fabric.o 00:10:51.200 CC lib/nvme/nvme_ctrlr_cmd.o 00:10:51.200 CC lib/nvme/nvme_ctrlr.o 00:10:51.200 CC lib/nvme/nvme_ns_cmd.o 00:10:51.200 CC lib/nvme/nvme_ns.o 00:10:51.200 CC lib/nvme/nvme_pcie_common.o 00:10:51.200 CC lib/nvme/nvme.o 00:10:51.200 CC lib/nvme/nvme_pcie.o 00:10:51.200 CC lib/nvme/nvme_qpair.o 00:10:52.138 CC lib/nvme/nvme_quirks.o 00:10:52.138 LIB libspdk_thread.a 00:10:52.138 CC lib/nvme/nvme_transport.o 00:10:52.138 CC lib/nvme/nvme_discovery.o 00:10:52.138 SO libspdk_thread.so.10.1 00:10:52.138 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:10:52.138 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:10:52.138 CC lib/nvme/nvme_tcp.o 00:10:52.138 SYMLINK libspdk_thread.so 00:10:52.138 CC lib/nvme/nvme_opal.o 00:10:52.138 
CC lib/nvme/nvme_io_msg.o 00:10:52.397 CC lib/nvme/nvme_poll_group.o 00:10:52.397 CC lib/nvme/nvme_zns.o 00:10:52.656 CC lib/nvme/nvme_stubs.o 00:10:52.656 CC lib/nvme/nvme_auth.o 00:10:52.656 CC lib/nvme/nvme_cuse.o 00:10:52.915 CC lib/accel/accel.o 00:10:52.915 CC lib/nvme/nvme_rdma.o 00:10:52.915 CC lib/accel/accel_rpc.o 00:10:52.915 CC lib/accel/accel_sw.o 00:10:53.175 CC lib/blob/blobstore.o 00:10:53.175 CC lib/init/json_config.o 00:10:53.434 CC lib/init/subsystem.o 00:10:53.434 CC lib/virtio/virtio.o 00:10:53.434 CC lib/virtio/virtio_vhost_user.o 00:10:53.434 CC lib/init/subsystem_rpc.o 00:10:53.692 CC lib/virtio/virtio_vfio_user.o 00:10:53.692 CC lib/virtio/virtio_pci.o 00:10:53.692 CC lib/init/rpc.o 00:10:53.692 CC lib/blob/request.o 00:10:53.692 CC lib/blob/zeroes.o 00:10:53.950 LIB libspdk_init.a 00:10:53.950 CC lib/blob/blob_bs_dev.o 00:10:53.950 SO libspdk_init.so.5.0 00:10:53.950 SYMLINK libspdk_init.so 00:10:53.950 LIB libspdk_accel.a 00:10:53.950 LIB libspdk_virtio.a 00:10:53.950 SO libspdk_accel.so.15.1 00:10:53.950 SO libspdk_virtio.so.7.0 00:10:54.208 SYMLINK libspdk_accel.so 00:10:54.208 SYMLINK libspdk_virtio.so 00:10:54.208 LIB libspdk_nvme.a 00:10:54.208 CC lib/event/app.o 00:10:54.208 CC lib/event/reactor.o 00:10:54.208 CC lib/event/app_rpc.o 00:10:54.208 CC lib/event/log_rpc.o 00:10:54.208 CC lib/event/scheduler_static.o 00:10:54.466 SO libspdk_nvme.so.13.1 00:10:54.466 CC lib/bdev/bdev_rpc.o 00:10:54.466 CC lib/bdev/bdev.o 00:10:54.466 CC lib/bdev/bdev_zone.o 00:10:54.466 CC lib/bdev/part.o 00:10:54.466 CC lib/bdev/scsi_nvme.o 00:10:55.033 LIB libspdk_event.a 00:10:55.033 SO libspdk_event.so.14.0 00:10:55.033 SYMLINK libspdk_nvme.so 00:10:55.033 SYMLINK libspdk_event.so 00:10:56.934 LIB libspdk_blob.a 00:10:56.934 SO libspdk_blob.so.11.0 00:10:57.193 SYMLINK libspdk_blob.so 00:10:57.451 CC lib/lvol/lvol.o 00:10:57.451 CC lib/blobfs/blobfs.o 00:10:57.451 CC lib/blobfs/tree.o 00:10:57.710 LIB libspdk_bdev.a 00:10:57.710 SO libspdk_bdev.so.15.1 00:10:57.968 SYMLINK libspdk_bdev.so 00:10:58.226 CC lib/nvmf/ctrlr_discovery.o 00:10:58.226 CC lib/nvmf/ctrlr.o 00:10:58.226 CC lib/nvmf/ctrlr_bdev.o 00:10:58.226 CC lib/nvmf/subsystem.o 00:10:58.226 CC lib/scsi/dev.o 00:10:58.226 CC lib/ublk/ublk.o 00:10:58.226 CC lib/ftl/ftl_core.o 00:10:58.226 CC lib/nbd/nbd.o 00:10:58.485 LIB libspdk_blobfs.a 00:10:58.485 SO libspdk_blobfs.so.10.0 00:10:58.485 CC lib/scsi/lun.o 00:10:58.485 CC lib/ftl/ftl_init.o 00:10:58.485 LIB libspdk_lvol.a 00:10:58.485 SYMLINK libspdk_blobfs.so 00:10:58.485 CC lib/scsi/port.o 00:10:58.744 SO libspdk_lvol.so.10.0 00:10:58.744 CC lib/scsi/scsi.o 00:10:58.744 CC lib/nbd/nbd_rpc.o 00:10:58.744 SYMLINK libspdk_lvol.so 00:10:58.744 CC lib/scsi/scsi_bdev.o 00:10:58.744 CC lib/ftl/ftl_layout.o 00:10:58.744 CC lib/ublk/ublk_rpc.o 00:10:58.744 CC lib/nvmf/nvmf.o 00:10:58.744 CC lib/scsi/scsi_pr.o 00:10:58.744 LIB libspdk_nbd.a 00:10:59.002 SO libspdk_nbd.so.7.0 00:10:59.002 CC lib/scsi/scsi_rpc.o 00:10:59.002 LIB libspdk_ublk.a 00:10:59.002 SYMLINK libspdk_nbd.so 00:10:59.002 CC lib/nvmf/nvmf_rpc.o 00:10:59.002 CC lib/nvmf/transport.o 00:10:59.002 SO libspdk_ublk.so.3.0 00:10:59.002 SYMLINK libspdk_ublk.so 00:10:59.002 CC lib/scsi/task.o 00:10:59.002 CC lib/ftl/ftl_debug.o 00:10:59.002 CC lib/ftl/ftl_io.o 00:10:59.261 CC lib/ftl/ftl_sb.o 00:10:59.261 CC lib/ftl/ftl_l2p.o 00:10:59.261 CC lib/ftl/ftl_l2p_flat.o 00:10:59.261 LIB libspdk_scsi.a 00:10:59.261 CC lib/ftl/ftl_nv_cache.o 00:10:59.261 CC lib/ftl/ftl_band.o 00:10:59.520 SO libspdk_scsi.so.9.0 00:10:59.520 
CC lib/ftl/ftl_band_ops.o 00:10:59.520 CC lib/nvmf/tcp.o 00:10:59.520 CC lib/nvmf/stubs.o 00:10:59.520 SYMLINK libspdk_scsi.so 00:10:59.520 CC lib/nvmf/mdns_server.o 00:10:59.779 CC lib/nvmf/rdma.o 00:10:59.779 CC lib/ftl/ftl_writer.o 00:10:59.779 CC lib/nvmf/auth.o 00:10:59.779 CC lib/ftl/ftl_rq.o 00:10:59.779 CC lib/ftl/ftl_reloc.o 00:11:00.039 CC lib/ftl/ftl_l2p_cache.o 00:11:00.039 CC lib/ftl/ftl_p2l.o 00:11:00.039 CC lib/ftl/mngt/ftl_mngt.o 00:11:00.298 CC lib/vhost/vhost.o 00:11:00.298 CC lib/iscsi/conn.o 00:11:00.298 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:11:00.298 CC lib/vhost/vhost_rpc.o 00:11:00.298 CC lib/vhost/vhost_scsi.o 00:11:00.556 CC lib/vhost/vhost_blk.o 00:11:00.556 CC lib/vhost/rte_vhost_user.o 00:11:00.556 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:11:00.815 CC lib/iscsi/init_grp.o 00:11:00.815 CC lib/ftl/mngt/ftl_mngt_startup.o 00:11:00.815 CC lib/iscsi/iscsi.o 00:11:00.815 CC lib/iscsi/md5.o 00:11:00.815 CC lib/ftl/mngt/ftl_mngt_md.o 00:11:01.074 CC lib/iscsi/param.o 00:11:01.074 CC lib/ftl/mngt/ftl_mngt_misc.o 00:11:01.074 CC lib/iscsi/portal_grp.o 00:11:01.332 CC lib/iscsi/tgt_node.o 00:11:01.332 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:11:01.332 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:11:01.332 CC lib/ftl/mngt/ftl_mngt_band.o 00:11:01.332 CC lib/iscsi/iscsi_subsystem.o 00:11:01.332 CC lib/iscsi/iscsi_rpc.o 00:11:01.332 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:11:01.332 CC lib/iscsi/task.o 00:11:01.591 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:11:01.591 LIB libspdk_vhost.a 00:11:01.591 SO libspdk_vhost.so.8.0 00:11:01.591 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:11:01.591 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:11:01.591 CC lib/ftl/utils/ftl_conf.o 00:11:01.591 CC lib/ftl/utils/ftl_md.o 00:11:01.591 SYMLINK libspdk_vhost.so 00:11:01.591 CC lib/ftl/utils/ftl_mempool.o 00:11:01.591 CC lib/ftl/utils/ftl_bitmap.o 00:11:01.850 CC lib/ftl/utils/ftl_property.o 00:11:01.850 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:11:01.850 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:11:01.850 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:11:01.850 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:11:01.850 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:11:01.850 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:11:01.850 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:11:02.108 CC lib/ftl/upgrade/ftl_sb_v3.o 00:11:02.108 CC lib/ftl/upgrade/ftl_sb_v5.o 00:11:02.108 CC lib/ftl/nvc/ftl_nvc_dev.o 00:11:02.108 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:11:02.108 CC lib/ftl/base/ftl_base_dev.o 00:11:02.108 LIB libspdk_nvmf.a 00:11:02.108 CC lib/ftl/base/ftl_base_bdev.o 00:11:02.108 CC lib/ftl/ftl_trace.o 00:11:02.391 SO libspdk_nvmf.so.18.1 00:11:02.391 LIB libspdk_iscsi.a 00:11:02.391 SO libspdk_iscsi.so.8.0 00:11:02.391 LIB libspdk_ftl.a 00:11:02.671 SYMLINK libspdk_nvmf.so 00:11:02.671 SYMLINK libspdk_iscsi.so 00:11:02.671 SO libspdk_ftl.so.9.0 00:11:02.930 SYMLINK libspdk_ftl.so 00:11:03.498 CC module/env_dpdk/env_dpdk_rpc.o 00:11:03.498 CC module/sock/posix/posix.o 00:11:03.498 CC module/keyring/file/keyring.o 00:11:03.498 CC module/keyring/linux/keyring.o 00:11:03.498 CC module/scheduler/gscheduler/gscheduler.o 00:11:03.498 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:11:03.498 CC module/blob/bdev/blob_bdev.o 00:11:03.498 CC module/accel/ioat/accel_ioat.o 00:11:03.498 CC module/scheduler/dynamic/scheduler_dynamic.o 00:11:03.498 CC module/accel/error/accel_error.o 00:11:03.498 LIB libspdk_env_dpdk_rpc.a 00:11:03.498 SO libspdk_env_dpdk_rpc.so.6.0 00:11:03.756 CC module/keyring/file/keyring_rpc.o 00:11:03.756 CC module/keyring/linux/keyring_rpc.o 
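(Editor's note: the lib/env_dpdk and module/env_dpdk objects compiled above wrap the DPDK EAL built earlier behind SPDK's env API. For orientation only, a minimal hedged sketch of that API follows; the process name is invented for the example, and the exact option-struct handling can vary slightly between SPDK releases.

#include <stdio.h>
#include "spdk/env.h"

int main(void)
{
    struct spdk_env_opts opts;

    /* Fill in defaults, then override only what this example cares about. */
    spdk_env_opts_init(&opts);
    opts.name = "env_example";   /* placeholder process name */

    if (spdk_env_init(&opts) < 0) {
        fprintf(stderr, "spdk_env_init failed\n");
        return 1;
    }

    /* DMA-safe, zeroed allocation from hugepage-backed memory. */
    void *buf = spdk_dma_zmalloc(4096, 0x1000, NULL);
    if (buf == NULL) {
        fprintf(stderr, "spdk_dma_zmalloc failed\n");
        spdk_env_fini();
        return 1;
    }

    spdk_dma_free(buf);
    spdk_env_fini();
    return 0;
}

End of note; the build log continues below.)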
00:11:03.756 LIB libspdk_scheduler_gscheduler.a 00:11:03.756 LIB libspdk_scheduler_dpdk_governor.a 00:11:03.756 SYMLINK libspdk_env_dpdk_rpc.so 00:11:03.756 CC module/accel/error/accel_error_rpc.o 00:11:03.756 SO libspdk_scheduler_gscheduler.so.4.0 00:11:03.756 SO libspdk_scheduler_dpdk_governor.so.4.0 00:11:03.756 CC module/accel/ioat/accel_ioat_rpc.o 00:11:03.756 LIB libspdk_scheduler_dynamic.a 00:11:03.756 SYMLINK libspdk_scheduler_gscheduler.so 00:11:03.756 SYMLINK libspdk_scheduler_dpdk_governor.so 00:11:03.756 SO libspdk_scheduler_dynamic.so.4.0 00:11:03.756 LIB libspdk_keyring_file.a 00:11:03.756 LIB libspdk_keyring_linux.a 00:11:03.756 LIB libspdk_blob_bdev.a 00:11:03.756 SYMLINK libspdk_scheduler_dynamic.so 00:11:03.756 LIB libspdk_accel_error.a 00:11:03.756 SO libspdk_keyring_file.so.1.0 00:11:03.756 SO libspdk_keyring_linux.so.1.0 00:11:03.756 SO libspdk_blob_bdev.so.11.0 00:11:03.756 LIB libspdk_accel_ioat.a 00:11:03.756 SO libspdk_accel_error.so.2.0 00:11:03.756 SYMLINK libspdk_blob_bdev.so 00:11:03.756 SO libspdk_accel_ioat.so.6.0 00:11:04.014 SYMLINK libspdk_keyring_linux.so 00:11:04.014 SYMLINK libspdk_keyring_file.so 00:11:04.014 CC module/accel/iaa/accel_iaa.o 00:11:04.014 CC module/accel/iaa/accel_iaa_rpc.o 00:11:04.014 SYMLINK libspdk_accel_ioat.so 00:11:04.014 CC module/accel/dsa/accel_dsa.o 00:11:04.014 CC module/accel/dsa/accel_dsa_rpc.o 00:11:04.014 SYMLINK libspdk_accel_error.so 00:11:04.014 LIB libspdk_accel_iaa.a 00:11:04.274 CC module/blobfs/bdev/blobfs_bdev.o 00:11:04.274 SO libspdk_accel_iaa.so.3.0 00:11:04.274 CC module/bdev/gpt/gpt.o 00:11:04.274 CC module/bdev/lvol/vbdev_lvol.o 00:11:04.274 CC module/bdev/delay/vbdev_delay.o 00:11:04.274 CC module/bdev/error/vbdev_error.o 00:11:04.274 LIB libspdk_accel_dsa.a 00:11:04.274 SO libspdk_accel_dsa.so.5.0 00:11:04.274 SYMLINK libspdk_accel_iaa.so 00:11:04.274 CC module/bdev/malloc/bdev_malloc.o 00:11:04.274 CC module/bdev/malloc/bdev_malloc_rpc.o 00:11:04.274 CC module/bdev/null/bdev_null.o 00:11:04.274 LIB libspdk_sock_posix.a 00:11:04.274 SYMLINK libspdk_accel_dsa.so 00:11:04.274 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:11:04.274 CC module/bdev/error/vbdev_error_rpc.o 00:11:04.274 SO libspdk_sock_posix.so.6.0 00:11:04.274 CC module/bdev/gpt/vbdev_gpt.o 00:11:04.532 SYMLINK libspdk_sock_posix.so 00:11:04.532 CC module/bdev/delay/vbdev_delay_rpc.o 00:11:04.532 LIB libspdk_blobfs_bdev.a 00:11:04.532 LIB libspdk_bdev_error.a 00:11:04.532 SO libspdk_blobfs_bdev.so.6.0 00:11:04.533 SO libspdk_bdev_error.so.6.0 00:11:04.533 CC module/bdev/null/bdev_null_rpc.o 00:11:04.533 SYMLINK libspdk_bdev_error.so 00:11:04.533 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:11:04.533 SYMLINK libspdk_blobfs_bdev.so 00:11:04.533 LIB libspdk_bdev_delay.a 00:11:04.533 CC module/bdev/passthru/vbdev_passthru.o 00:11:04.533 CC module/bdev/nvme/bdev_nvme.o 00:11:04.533 SO libspdk_bdev_delay.so.6.0 00:11:04.790 LIB libspdk_bdev_gpt.a 00:11:04.790 LIB libspdk_bdev_malloc.a 00:11:04.790 SO libspdk_bdev_gpt.so.6.0 00:11:04.790 SO libspdk_bdev_malloc.so.6.0 00:11:04.790 CC module/bdev/nvme/bdev_nvme_rpc.o 00:11:04.790 LIB libspdk_bdev_null.a 00:11:04.790 SYMLINK libspdk_bdev_delay.so 00:11:04.790 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:11:04.790 CC module/bdev/raid/bdev_raid.o 00:11:04.790 SYMLINK libspdk_bdev_gpt.so 00:11:04.790 SYMLINK libspdk_bdev_malloc.so 00:11:04.790 SO libspdk_bdev_null.so.6.0 00:11:04.790 CC module/bdev/split/vbdev_split.o 00:11:04.790 SYMLINK libspdk_bdev_null.so 00:11:04.790 CC module/bdev/split/vbdev_split_rpc.o 
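(Editor's note: the module/bdev objects above - malloc, null, gpt, lvol, delay, error, passthru, nvme, raid, split - are backends that SPDK's generic bdev layer dispatches to. As a rough, hedged sketch of how that layer is consumed, the function below opens a bdev by name and prints its geometry; it assumes it is called from an SPDK application thread after the bdev subsystem is up, and "Malloc0" or any other name is the caller's choice - neither assumption comes from this log.

#include <stdio.h>
#include <inttypes.h>
#include "spdk/bdev.h"

static void
bdev_event_cb(enum spdk_bdev_event_type type, struct spdk_bdev *bdev, void *ctx)
{
    /* A real consumer would handle SPDK_BDEV_EVENT_REMOVE here. */
}

/* Assumed to run on an SPDK application thread once the bdev subsystem
 * has finished initializing; the bdev name is a placeholder. */
static void
describe_bdev(const char *name)
{
    struct spdk_bdev_desc *desc;

    if (spdk_bdev_open_ext(name, false, bdev_event_cb, NULL, &desc) != 0) {
        fprintf(stderr, "bdev %s not found\n", name);
        return;
    }

    struct spdk_bdev *bdev = spdk_bdev_desc_get_bdev(desc);
    printf("%s: %" PRIu64 " blocks of %u bytes\n",
           spdk_bdev_get_name(bdev),
           spdk_bdev_get_num_blocks(bdev),
           spdk_bdev_get_block_size(bdev));

    spdk_bdev_close(desc);
}

End of note; the build log continues below.)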
00:11:04.790 CC module/bdev/raid/bdev_raid_rpc.o 00:11:05.048 CC module/bdev/zone_block/vbdev_zone_block.o 00:11:05.048 CC module/bdev/xnvme/bdev_xnvme.o 00:11:05.048 LIB libspdk_bdev_passthru.a 00:11:05.048 SO libspdk_bdev_passthru.so.6.0 00:11:05.048 LIB libspdk_bdev_lvol.a 00:11:05.048 SO libspdk_bdev_lvol.so.6.0 00:11:05.048 LIB libspdk_bdev_split.a 00:11:05.048 SYMLINK libspdk_bdev_passthru.so 00:11:05.048 CC module/bdev/raid/bdev_raid_sb.o 00:11:05.048 SYMLINK libspdk_bdev_lvol.so 00:11:05.048 SO libspdk_bdev_split.so.6.0 00:11:05.048 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:11:05.305 SYMLINK libspdk_bdev_split.so 00:11:05.305 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:11:05.306 CC module/bdev/nvme/nvme_rpc.o 00:11:05.306 CC module/bdev/aio/bdev_aio.o 00:11:05.306 CC module/bdev/raid/raid0.o 00:11:05.306 LIB libspdk_bdev_zone_block.a 00:11:05.306 SO libspdk_bdev_zone_block.so.6.0 00:11:05.306 LIB libspdk_bdev_xnvme.a 00:11:05.306 CC module/bdev/ftl/bdev_ftl.o 00:11:05.306 SYMLINK libspdk_bdev_zone_block.so 00:11:05.306 CC module/bdev/ftl/bdev_ftl_rpc.o 00:11:05.306 SO libspdk_bdev_xnvme.so.3.0 00:11:05.563 CC module/bdev/raid/raid1.o 00:11:05.563 CC module/bdev/raid/concat.o 00:11:05.563 SYMLINK libspdk_bdev_xnvme.so 00:11:05.563 CC module/bdev/nvme/bdev_mdns_client.o 00:11:05.563 CC module/bdev/iscsi/bdev_iscsi.o 00:11:05.563 CC module/bdev/aio/bdev_aio_rpc.o 00:11:05.563 CC module/bdev/nvme/vbdev_opal.o 00:11:05.563 CC module/bdev/virtio/bdev_virtio_scsi.o 00:11:05.821 LIB libspdk_bdev_ftl.a 00:11:05.821 CC module/bdev/virtio/bdev_virtio_blk.o 00:11:05.821 CC module/bdev/virtio/bdev_virtio_rpc.o 00:11:05.821 CC module/bdev/nvme/vbdev_opal_rpc.o 00:11:05.821 SO libspdk_bdev_ftl.so.6.0 00:11:05.821 LIB libspdk_bdev_aio.a 00:11:05.821 SO libspdk_bdev_aio.so.6.0 00:11:05.821 SYMLINK libspdk_bdev_ftl.so 00:11:05.821 LIB libspdk_bdev_raid.a 00:11:05.821 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:11:05.821 SO libspdk_bdev_raid.so.6.0 00:11:05.821 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:11:05.821 SYMLINK libspdk_bdev_aio.so 00:11:06.080 SYMLINK libspdk_bdev_raid.so 00:11:06.080 LIB libspdk_bdev_iscsi.a 00:11:06.080 SO libspdk_bdev_iscsi.so.6.0 00:11:06.080 SYMLINK libspdk_bdev_iscsi.so 00:11:06.337 LIB libspdk_bdev_virtio.a 00:11:06.337 SO libspdk_bdev_virtio.so.6.0 00:11:06.337 SYMLINK libspdk_bdev_virtio.so 00:11:07.277 LIB libspdk_bdev_nvme.a 00:11:07.277 SO libspdk_bdev_nvme.so.7.0 00:11:07.535 SYMLINK libspdk_bdev_nvme.so 00:11:08.100 CC module/event/subsystems/sock/sock.o 00:11:08.100 CC module/event/subsystems/keyring/keyring.o 00:11:08.100 CC module/event/subsystems/scheduler/scheduler.o 00:11:08.100 CC module/event/subsystems/iobuf/iobuf.o 00:11:08.100 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:11:08.101 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:11:08.101 CC module/event/subsystems/vmd/vmd.o 00:11:08.101 CC module/event/subsystems/vmd/vmd_rpc.o 00:11:08.101 LIB libspdk_event_scheduler.a 00:11:08.101 LIB libspdk_event_sock.a 00:11:08.101 LIB libspdk_event_vhost_blk.a 00:11:08.101 LIB libspdk_event_keyring.a 00:11:08.101 LIB libspdk_event_iobuf.a 00:11:08.101 SO libspdk_event_scheduler.so.4.0 00:11:08.101 SO libspdk_event_sock.so.5.0 00:11:08.101 SO libspdk_event_keyring.so.1.0 00:11:08.101 LIB libspdk_event_vmd.a 00:11:08.101 SO libspdk_event_vhost_blk.so.3.0 00:11:08.101 SO libspdk_event_iobuf.so.3.0 00:11:08.359 SO libspdk_event_vmd.so.6.0 00:11:08.359 SYMLINK libspdk_event_keyring.so 00:11:08.359 SYMLINK libspdk_event_scheduler.so 00:11:08.359 SYMLINK 
libspdk_event_vhost_blk.so 00:11:08.359 SYMLINK libspdk_event_sock.so 00:11:08.359 SYMLINK libspdk_event_iobuf.so 00:11:08.359 SYMLINK libspdk_event_vmd.so 00:11:08.617 CC module/event/subsystems/accel/accel.o 00:11:08.876 LIB libspdk_event_accel.a 00:11:08.876 SO libspdk_event_accel.so.6.0 00:11:08.876 SYMLINK libspdk_event_accel.so 00:11:09.444 CC module/event/subsystems/bdev/bdev.o 00:11:09.703 LIB libspdk_event_bdev.a 00:11:09.703 SO libspdk_event_bdev.so.6.0 00:11:09.703 SYMLINK libspdk_event_bdev.so 00:11:09.962 CC module/event/subsystems/nbd/nbd.o 00:11:09.962 CC module/event/subsystems/ublk/ublk.o 00:11:09.962 CC module/event/subsystems/scsi/scsi.o 00:11:09.962 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:11:09.962 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:11:10.221 LIB libspdk_event_nbd.a 00:11:10.221 LIB libspdk_event_ublk.a 00:11:10.221 LIB libspdk_event_scsi.a 00:11:10.221 SO libspdk_event_ublk.so.3.0 00:11:10.221 SO libspdk_event_nbd.so.6.0 00:11:10.221 SO libspdk_event_scsi.so.6.0 00:11:10.221 SYMLINK libspdk_event_ublk.so 00:11:10.221 SYMLINK libspdk_event_scsi.so 00:11:10.221 SYMLINK libspdk_event_nbd.so 00:11:10.221 LIB libspdk_event_nvmf.a 00:11:10.479 SO libspdk_event_nvmf.so.6.0 00:11:10.479 SYMLINK libspdk_event_nvmf.so 00:11:10.738 CC module/event/subsystems/iscsi/iscsi.o 00:11:10.738 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:11:10.738 LIB libspdk_event_iscsi.a 00:11:10.997 LIB libspdk_event_vhost_scsi.a 00:11:10.997 SO libspdk_event_iscsi.so.6.0 00:11:10.997 SO libspdk_event_vhost_scsi.so.3.0 00:11:10.997 SYMLINK libspdk_event_vhost_scsi.so 00:11:10.997 SYMLINK libspdk_event_iscsi.so 00:11:11.254 SO libspdk.so.6.0 00:11:11.255 SYMLINK libspdk.so 00:11:11.513 CC test/rpc_client/rpc_client_test.o 00:11:11.513 TEST_HEADER include/spdk/accel.h 00:11:11.513 TEST_HEADER include/spdk/accel_module.h 00:11:11.513 TEST_HEADER include/spdk/assert.h 00:11:11.513 CXX app/trace/trace.o 00:11:11.513 CC app/trace_record/trace_record.o 00:11:11.513 TEST_HEADER include/spdk/barrier.h 00:11:11.513 TEST_HEADER include/spdk/base64.h 00:11:11.513 TEST_HEADER include/spdk/bdev.h 00:11:11.513 TEST_HEADER include/spdk/bdev_module.h 00:11:11.513 TEST_HEADER include/spdk/bdev_zone.h 00:11:11.513 TEST_HEADER include/spdk/bit_array.h 00:11:11.513 TEST_HEADER include/spdk/bit_pool.h 00:11:11.513 TEST_HEADER include/spdk/blob_bdev.h 00:11:11.513 TEST_HEADER include/spdk/blobfs_bdev.h 00:11:11.513 TEST_HEADER include/spdk/blobfs.h 00:11:11.513 TEST_HEADER include/spdk/blob.h 00:11:11.513 TEST_HEADER include/spdk/conf.h 00:11:11.513 TEST_HEADER include/spdk/config.h 00:11:11.513 TEST_HEADER include/spdk/cpuset.h 00:11:11.513 TEST_HEADER include/spdk/crc16.h 00:11:11.513 TEST_HEADER include/spdk/crc32.h 00:11:11.513 TEST_HEADER include/spdk/crc64.h 00:11:11.513 TEST_HEADER include/spdk/dif.h 00:11:11.513 TEST_HEADER include/spdk/dma.h 00:11:11.513 TEST_HEADER include/spdk/endian.h 00:11:11.513 TEST_HEADER include/spdk/env_dpdk.h 00:11:11.513 TEST_HEADER include/spdk/env.h 00:11:11.513 TEST_HEADER include/spdk/event.h 00:11:11.513 CC app/nvmf_tgt/nvmf_main.o 00:11:11.513 TEST_HEADER include/spdk/fd_group.h 00:11:11.513 TEST_HEADER include/spdk/fd.h 00:11:11.513 TEST_HEADER include/spdk/file.h 00:11:11.513 TEST_HEADER include/spdk/ftl.h 00:11:11.513 TEST_HEADER include/spdk/gpt_spec.h 00:11:11.513 TEST_HEADER include/spdk/hexlify.h 00:11:11.513 TEST_HEADER include/spdk/histogram_data.h 00:11:11.513 TEST_HEADER include/spdk/idxd.h 00:11:11.513 CC test/thread/poller_perf/poller_perf.o 
00:11:11.513 TEST_HEADER include/spdk/idxd_spec.h 00:11:11.513 TEST_HEADER include/spdk/init.h 00:11:11.513 TEST_HEADER include/spdk/ioat.h 00:11:11.513 TEST_HEADER include/spdk/ioat_spec.h 00:11:11.513 TEST_HEADER include/spdk/iscsi_spec.h 00:11:11.513 TEST_HEADER include/spdk/json.h 00:11:11.513 TEST_HEADER include/spdk/jsonrpc.h 00:11:11.513 TEST_HEADER include/spdk/keyring.h 00:11:11.513 CC examples/util/zipf/zipf.o 00:11:11.513 TEST_HEADER include/spdk/keyring_module.h 00:11:11.513 TEST_HEADER include/spdk/likely.h 00:11:11.513 TEST_HEADER include/spdk/log.h 00:11:11.513 TEST_HEADER include/spdk/lvol.h 00:11:11.513 TEST_HEADER include/spdk/memory.h 00:11:11.513 TEST_HEADER include/spdk/mmio.h 00:11:11.513 TEST_HEADER include/spdk/nbd.h 00:11:11.513 TEST_HEADER include/spdk/notify.h 00:11:11.513 TEST_HEADER include/spdk/nvme.h 00:11:11.513 TEST_HEADER include/spdk/nvme_intel.h 00:11:11.513 TEST_HEADER include/spdk/nvme_ocssd.h 00:11:11.513 CC test/app/bdev_svc/bdev_svc.o 00:11:11.513 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:11:11.513 TEST_HEADER include/spdk/nvme_spec.h 00:11:11.513 TEST_HEADER include/spdk/nvme_zns.h 00:11:11.513 TEST_HEADER include/spdk/nvmf_cmd.h 00:11:11.513 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:11:11.513 TEST_HEADER include/spdk/nvmf.h 00:11:11.513 TEST_HEADER include/spdk/nvmf_spec.h 00:11:11.513 TEST_HEADER include/spdk/nvmf_transport.h 00:11:11.513 CC test/dma/test_dma/test_dma.o 00:11:11.513 TEST_HEADER include/spdk/opal.h 00:11:11.513 TEST_HEADER include/spdk/opal_spec.h 00:11:11.513 TEST_HEADER include/spdk/pci_ids.h 00:11:11.771 TEST_HEADER include/spdk/pipe.h 00:11:11.771 TEST_HEADER include/spdk/queue.h 00:11:11.771 TEST_HEADER include/spdk/reduce.h 00:11:11.771 TEST_HEADER include/spdk/rpc.h 00:11:11.771 TEST_HEADER include/spdk/scheduler.h 00:11:11.771 TEST_HEADER include/spdk/scsi.h 00:11:11.771 TEST_HEADER include/spdk/scsi_spec.h 00:11:11.771 TEST_HEADER include/spdk/sock.h 00:11:11.771 TEST_HEADER include/spdk/stdinc.h 00:11:11.771 TEST_HEADER include/spdk/string.h 00:11:11.771 TEST_HEADER include/spdk/thread.h 00:11:11.771 TEST_HEADER include/spdk/trace.h 00:11:11.771 TEST_HEADER include/spdk/trace_parser.h 00:11:11.771 TEST_HEADER include/spdk/tree.h 00:11:11.771 TEST_HEADER include/spdk/ublk.h 00:11:11.771 LINK rpc_client_test 00:11:11.772 TEST_HEADER include/spdk/util.h 00:11:11.772 TEST_HEADER include/spdk/uuid.h 00:11:11.772 TEST_HEADER include/spdk/version.h 00:11:11.772 TEST_HEADER include/spdk/vfio_user_pci.h 00:11:11.772 TEST_HEADER include/spdk/vfio_user_spec.h 00:11:11.772 CC test/env/mem_callbacks/mem_callbacks.o 00:11:11.772 TEST_HEADER include/spdk/vhost.h 00:11:11.772 TEST_HEADER include/spdk/vmd.h 00:11:11.772 TEST_HEADER include/spdk/xor.h 00:11:11.772 TEST_HEADER include/spdk/zipf.h 00:11:11.772 CXX test/cpp_headers/accel.o 00:11:11.772 LINK poller_perf 00:11:11.772 LINK nvmf_tgt 00:11:11.772 LINK zipf 00:11:11.772 LINK spdk_trace_record 00:11:11.772 LINK bdev_svc 00:11:11.772 CXX test/cpp_headers/accel_module.o 00:11:11.772 LINK spdk_trace 00:11:12.030 CC test/env/vtophys/vtophys.o 00:11:12.030 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:11:12.030 LINK test_dma 00:11:12.030 CXX test/cpp_headers/assert.o 00:11:12.030 CC examples/ioat/perf/perf.o 00:11:12.030 CC examples/vmd/lsvmd/lsvmd.o 00:11:12.030 LINK vtophys 00:11:12.030 CC app/iscsi_tgt/iscsi_tgt.o 00:11:12.288 LINK env_dpdk_post_init 00:11:12.288 CXX test/cpp_headers/barrier.o 00:11:12.288 LINK mem_callbacks 00:11:12.288 CC 
test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:11:12.288 CC app/spdk_tgt/spdk_tgt.o 00:11:12.288 CXX test/cpp_headers/base64.o 00:11:12.288 LINK lsvmd 00:11:12.288 LINK iscsi_tgt 00:11:12.288 LINK ioat_perf 00:11:12.288 LINK spdk_tgt 00:11:12.546 CXX test/cpp_headers/bdev.o 00:11:12.546 CC examples/idxd/perf/perf.o 00:11:12.546 CC test/env/memory/memory_ut.o 00:11:12.546 CC examples/ioat/verify/verify.o 00:11:12.546 CC test/event/event_perf/event_perf.o 00:11:12.546 CC examples/vmd/led/led.o 00:11:12.546 CC test/env/pci/pci_ut.o 00:11:12.546 CXX test/cpp_headers/bdev_module.o 00:11:12.546 CC test/app/histogram_perf/histogram_perf.o 00:11:12.546 LINK nvme_fuzz 00:11:12.805 LINK verify 00:11:12.805 LINK event_perf 00:11:12.805 LINK led 00:11:12.805 CC app/spdk_lspci/spdk_lspci.o 00:11:12.805 LINK histogram_perf 00:11:12.805 LINK idxd_perf 00:11:12.805 CXX test/cpp_headers/bdev_zone.o 00:11:12.805 CXX test/cpp_headers/bit_array.o 00:11:12.805 LINK spdk_lspci 00:11:13.063 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:11:13.063 CC test/event/reactor/reactor.o 00:11:13.063 CXX test/cpp_headers/bit_pool.o 00:11:13.063 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:11:13.063 CC examples/interrupt_tgt/interrupt_tgt.o 00:11:13.063 LINK pci_ut 00:11:13.063 CC test/app/jsoncat/jsoncat.o 00:11:13.063 LINK reactor 00:11:13.063 CC test/app/stub/stub.o 00:11:13.063 CXX test/cpp_headers/blob_bdev.o 00:11:13.063 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:11:13.063 CC app/spdk_nvme_perf/perf.o 00:11:13.322 LINK jsoncat 00:11:13.322 LINK interrupt_tgt 00:11:13.322 LINK stub 00:11:13.322 CXX test/cpp_headers/blobfs_bdev.o 00:11:13.322 CXX test/cpp_headers/blobfs.o 00:11:13.322 CC test/event/reactor_perf/reactor_perf.o 00:11:13.322 CXX test/cpp_headers/blob.o 00:11:13.582 CXX test/cpp_headers/conf.o 00:11:13.582 LINK reactor_perf 00:11:13.582 CXX test/cpp_headers/config.o 00:11:13.582 LINK vhost_fuzz 00:11:13.582 CXX test/cpp_headers/cpuset.o 00:11:13.582 LINK memory_ut 00:11:13.582 CC app/spdk_nvme_identify/identify.o 00:11:13.582 CC examples/thread/thread/thread_ex.o 00:11:13.582 CC examples/sock/hello_world/hello_sock.o 00:11:13.842 CXX test/cpp_headers/crc16.o 00:11:13.842 CC test/event/app_repeat/app_repeat.o 00:11:13.842 CC test/event/scheduler/scheduler.o 00:11:13.842 LINK thread 00:11:13.842 CC test/nvme/aer/aer.o 00:11:13.842 CXX test/cpp_headers/crc32.o 00:11:13.842 LINK app_repeat 00:11:14.100 LINK hello_sock 00:11:14.100 CC test/accel/dif/dif.o 00:11:14.100 LINK spdk_nvme_perf 00:11:14.100 LINK scheduler 00:11:14.100 CXX test/cpp_headers/crc64.o 00:11:14.100 CC test/nvme/reset/reset.o 00:11:14.359 LINK aer 00:11:14.359 CC test/nvme/sgl/sgl.o 00:11:14.359 CXX test/cpp_headers/dif.o 00:11:14.359 CC examples/nvme/hello_world/hello_world.o 00:11:14.359 CC examples/nvme/reconnect/reconnect.o 00:11:14.359 CC examples/nvme/nvme_manage/nvme_manage.o 00:11:14.359 CXX test/cpp_headers/dma.o 00:11:14.359 LINK reset 00:11:14.618 LINK dif 00:11:14.618 CC examples/nvme/arbitration/arbitration.o 00:11:14.618 LINK sgl 00:11:14.618 CXX test/cpp_headers/endian.o 00:11:14.618 LINK hello_world 00:11:14.618 LINK spdk_nvme_identify 00:11:14.618 LINK reconnect 00:11:14.876 CXX test/cpp_headers/env_dpdk.o 00:11:14.876 CC test/nvme/e2edp/nvme_dp.o 00:11:14.876 LINK iscsi_fuzz 00:11:14.876 CC test/nvme/overhead/overhead.o 00:11:14.876 LINK arbitration 00:11:14.876 CC examples/nvme/hotplug/hotplug.o 00:11:14.876 CC app/spdk_nvme_discover/discovery_aer.o 00:11:14.876 CXX test/cpp_headers/env.o 00:11:14.876 CC 
examples/accel/perf/accel_perf.o 00:11:14.876 LINK nvme_manage 00:11:15.135 CC examples/nvme/cmb_copy/cmb_copy.o 00:11:15.135 LINK nvme_dp 00:11:15.135 CXX test/cpp_headers/event.o 00:11:15.135 LINK spdk_nvme_discover 00:11:15.135 LINK hotplug 00:11:15.135 CC examples/nvme/abort/abort.o 00:11:15.135 LINK overhead 00:11:15.135 LINK cmb_copy 00:11:15.135 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:11:15.135 CXX test/cpp_headers/fd_group.o 00:11:15.395 CC examples/blob/cli/blobcli.o 00:11:15.395 CC examples/blob/hello_world/hello_blob.o 00:11:15.395 LINK pmr_persistence 00:11:15.395 CXX test/cpp_headers/fd.o 00:11:15.395 CC app/spdk_top/spdk_top.o 00:11:15.395 CC test/nvme/err_injection/err_injection.o 00:11:15.395 CC test/nvme/startup/startup.o 00:11:15.395 LINK accel_perf 00:11:15.395 CC test/nvme/reserve/reserve.o 00:11:15.653 LINK abort 00:11:15.653 CXX test/cpp_headers/file.o 00:11:15.653 LINK hello_blob 00:11:15.653 LINK err_injection 00:11:15.653 LINK startup 00:11:15.653 CC app/vhost/vhost.o 00:11:15.653 LINK reserve 00:11:15.653 CXX test/cpp_headers/ftl.o 00:11:15.653 CXX test/cpp_headers/gpt_spec.o 00:11:15.653 CXX test/cpp_headers/hexlify.o 00:11:15.653 CC app/spdk_dd/spdk_dd.o 00:11:15.912 LINK vhost 00:11:15.912 LINK blobcli 00:11:15.912 CXX test/cpp_headers/histogram_data.o 00:11:15.912 CC test/nvme/simple_copy/simple_copy.o 00:11:15.912 CC test/blobfs/mkfs/mkfs.o 00:11:15.912 CC examples/bdev/hello_world/hello_bdev.o 00:11:16.171 CXX test/cpp_headers/idxd.o 00:11:16.171 CC test/bdev/bdevio/bdevio.o 00:11:16.171 LINK spdk_dd 00:11:16.171 CC test/lvol/esnap/esnap.o 00:11:16.171 CC test/nvme/connect_stress/connect_stress.o 00:11:16.171 LINK mkfs 00:11:16.171 LINK simple_copy 00:11:16.171 CC app/fio/nvme/fio_plugin.o 00:11:16.171 CXX test/cpp_headers/idxd_spec.o 00:11:16.172 LINK hello_bdev 00:11:16.431 LINK spdk_top 00:11:16.431 CXX test/cpp_headers/init.o 00:11:16.431 LINK connect_stress 00:11:16.431 CXX test/cpp_headers/ioat.o 00:11:16.431 CXX test/cpp_headers/ioat_spec.o 00:11:16.431 CXX test/cpp_headers/iscsi_spec.o 00:11:16.431 CXX test/cpp_headers/json.o 00:11:16.431 LINK bdevio 00:11:16.431 CC examples/bdev/bdevperf/bdevperf.o 00:11:16.431 CC app/fio/bdev/fio_plugin.o 00:11:16.689 CC test/nvme/boot_partition/boot_partition.o 00:11:16.689 CXX test/cpp_headers/jsonrpc.o 00:11:16.689 CXX test/cpp_headers/keyring.o 00:11:16.689 CC test/nvme/compliance/nvme_compliance.o 00:11:16.689 CC test/nvme/fused_ordering/fused_ordering.o 00:11:16.689 LINK boot_partition 00:11:16.689 CXX test/cpp_headers/keyring_module.o 00:11:16.689 CC test/nvme/doorbell_aers/doorbell_aers.o 00:11:16.948 LINK spdk_nvme 00:11:16.948 CC test/nvme/fdp/fdp.o 00:11:16.948 CXX test/cpp_headers/likely.o 00:11:16.948 LINK fused_ordering 00:11:16.948 CXX test/cpp_headers/log.o 00:11:16.948 LINK doorbell_aers 00:11:16.948 CC test/nvme/cuse/cuse.o 00:11:16.948 LINK nvme_compliance 00:11:16.948 LINK spdk_bdev 00:11:17.208 CXX test/cpp_headers/lvol.o 00:11:17.208 CXX test/cpp_headers/memory.o 00:11:17.208 CXX test/cpp_headers/mmio.o 00:11:17.208 CXX test/cpp_headers/nbd.o 00:11:17.208 CXX test/cpp_headers/notify.o 00:11:17.208 CXX test/cpp_headers/nvme.o 00:11:17.208 CXX test/cpp_headers/nvme_intel.o 00:11:17.208 LINK fdp 00:11:17.208 CXX test/cpp_headers/nvme_ocssd.o 00:11:17.208 CXX test/cpp_headers/nvme_ocssd_spec.o 00:11:17.208 CXX test/cpp_headers/nvme_spec.o 00:11:17.466 LINK bdevperf 00:11:17.466 CXX test/cpp_headers/nvme_zns.o 00:11:17.466 CXX test/cpp_headers/nvmf_cmd.o 00:11:17.466 CXX 
test/cpp_headers/nvmf_fc_spec.o 00:11:17.466 CXX test/cpp_headers/nvmf.o 00:11:17.466 CXX test/cpp_headers/nvmf_spec.o 00:11:17.466 CXX test/cpp_headers/nvmf_transport.o 00:11:17.466 CXX test/cpp_headers/opal.o 00:11:17.466 CXX test/cpp_headers/opal_spec.o 00:11:17.466 CXX test/cpp_headers/pci_ids.o 00:11:17.466 CXX test/cpp_headers/pipe.o 00:11:17.725 CXX test/cpp_headers/queue.o 00:11:17.725 CXX test/cpp_headers/reduce.o 00:11:17.725 CXX test/cpp_headers/rpc.o 00:11:17.725 CXX test/cpp_headers/scheduler.o 00:11:17.725 CXX test/cpp_headers/scsi.o 00:11:17.725 CXX test/cpp_headers/scsi_spec.o 00:11:17.725 CXX test/cpp_headers/sock.o 00:11:17.725 CXX test/cpp_headers/stdinc.o 00:11:17.725 CC examples/nvmf/nvmf/nvmf.o 00:11:17.725 CXX test/cpp_headers/string.o 00:11:17.725 CXX test/cpp_headers/thread.o 00:11:17.725 CXX test/cpp_headers/trace.o 00:11:17.725 CXX test/cpp_headers/trace_parser.o 00:11:17.725 CXX test/cpp_headers/tree.o 00:11:17.983 CXX test/cpp_headers/ublk.o 00:11:17.983 CXX test/cpp_headers/util.o 00:11:17.983 CXX test/cpp_headers/uuid.o 00:11:17.983 CXX test/cpp_headers/version.o 00:11:17.983 CXX test/cpp_headers/vfio_user_pci.o 00:11:17.983 CXX test/cpp_headers/vfio_user_spec.o 00:11:17.983 CXX test/cpp_headers/vhost.o 00:11:17.983 CXX test/cpp_headers/vmd.o 00:11:17.983 CXX test/cpp_headers/xor.o 00:11:17.983 LINK nvmf 00:11:17.983 CXX test/cpp_headers/zipf.o 00:11:18.242 LINK cuse 00:11:22.452 LINK esnap 00:11:22.452 00:11:22.452 real 1m10.546s 00:11:22.452 user 6m9.367s 00:11:22.452 sys 1m52.691s 00:11:22.452 12:13:31 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:11:22.452 12:13:31 make -- common/autotest_common.sh@10 -- $ set +x 00:11:22.452 ************************************ 00:11:22.452 END TEST make 00:11:22.452 ************************************ 00:11:22.452 12:13:31 -- common/autotest_common.sh@1142 -- $ return 0 00:11:22.452 12:13:31 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:11:22.452 12:13:31 -- pm/common@29 -- $ signal_monitor_resources TERM 00:11:22.452 12:13:31 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:11:22.452 12:13:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:11:22.452 12:13:31 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:11:22.452 12:13:31 -- pm/common@44 -- $ pid=5176 00:11:22.452 12:13:31 -- pm/common@50 -- $ kill -TERM 5176 00:11:22.452 12:13:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:11:22.452 12:13:31 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:11:22.452 12:13:31 -- pm/common@44 -- $ pid=5178 00:11:22.452 12:13:31 -- pm/common@50 -- $ kill -TERM 5178 00:11:22.452 12:13:31 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:22.452 12:13:31 -- nvmf/common.sh@7 -- # uname -s 00:11:22.452 12:13:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:22.452 12:13:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:22.452 12:13:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:22.452 12:13:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:22.452 12:13:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:22.452 12:13:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:22.452 12:13:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:22.452 12:13:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:22.452 12:13:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:22.452 
12:13:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:22.452 12:13:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02a694d1-0c30-4741-8b3e-64bbf390c556 00:11:22.452 12:13:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=02a694d1-0c30-4741-8b3e-64bbf390c556 00:11:22.452 12:13:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:22.452 12:13:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:22.452 12:13:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:11:22.452 12:13:31 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:22.452 12:13:31 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:22.452 12:13:31 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:22.452 12:13:31 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:22.452 12:13:31 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:22.452 12:13:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.452 12:13:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.452 12:13:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.452 12:13:31 -- paths/export.sh@5 -- # export PATH 00:11:22.452 12:13:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.452 12:13:31 -- nvmf/common.sh@47 -- # : 0 00:11:22.452 12:13:31 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:22.452 12:13:31 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:22.452 12:13:31 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:22.452 12:13:31 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:22.452 12:13:31 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:22.452 12:13:31 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:22.452 12:13:31 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:22.452 12:13:31 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:22.452 12:13:31 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:11:22.452 12:13:31 -- spdk/autotest.sh@32 -- # uname -s 00:11:22.452 12:13:31 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:11:22.452 12:13:31 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:11:22.452 12:13:31 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:11:22.452 12:13:31 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:11:22.452 12:13:31 -- spdk/autotest.sh@40 -- # echo 
/home/vagrant/spdk_repo/spdk/../output/coredumps 00:11:22.452 12:13:31 -- spdk/autotest.sh@44 -- # modprobe nbd 00:11:22.452 12:13:31 -- spdk/autotest.sh@46 -- # type -P udevadm 00:11:22.452 12:13:31 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:11:22.452 12:13:31 -- spdk/autotest.sh@48 -- # udevadm_pid=53691 00:11:22.452 12:13:31 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:11:22.452 12:13:31 -- pm/common@17 -- # local monitor 00:11:22.452 12:13:31 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:11:22.452 12:13:31 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:11:22.452 12:13:31 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:11:22.453 12:13:31 -- pm/common@25 -- # sleep 1 00:11:22.453 12:13:31 -- pm/common@21 -- # date +%s 00:11:22.453 12:13:31 -- pm/common@21 -- # date +%s 00:11:22.453 12:13:31 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1720613611 00:11:22.453 12:13:31 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1720613611 00:11:22.453 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1720613611_collect-vmstat.pm.log 00:11:22.453 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1720613611_collect-cpu-load.pm.log 00:11:23.390 12:13:32 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:11:23.390 12:13:32 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:11:23.390 12:13:32 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:23.390 12:13:32 -- common/autotest_common.sh@10 -- # set +x 00:11:23.390 12:13:32 -- spdk/autotest.sh@59 -- # create_test_list 00:11:23.390 12:13:32 -- common/autotest_common.sh@746 -- # xtrace_disable 00:11:23.390 12:13:32 -- common/autotest_common.sh@10 -- # set +x 00:11:23.649 12:13:32 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:11:23.649 12:13:32 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:11:23.649 12:13:32 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:11:23.649 12:13:32 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:11:23.649 12:13:32 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:11:23.649 12:13:32 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:11:23.649 12:13:32 -- common/autotest_common.sh@1455 -- # uname 00:11:23.649 12:13:32 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:11:23.649 12:13:32 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:11:23.649 12:13:32 -- common/autotest_common.sh@1475 -- # uname 00:11:23.649 12:13:32 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:11:23.649 12:13:32 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:11:23.649 12:13:32 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:11:23.649 12:13:32 -- spdk/autotest.sh@72 -- # hash lcov 00:11:23.649 12:13:32 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:11:23.649 12:13:32 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:11:23.649 --rc lcov_branch_coverage=1 00:11:23.649 --rc lcov_function_coverage=1 00:11:23.649 --rc genhtml_branch_coverage=1 00:11:23.649 --rc genhtml_function_coverage=1 00:11:23.649 --rc genhtml_legend=1 00:11:23.649 --rc geninfo_all_blocks=1 00:11:23.649 ' 
00:11:23.649 12:13:32 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:11:23.649 --rc lcov_branch_coverage=1 00:11:23.649 --rc lcov_function_coverage=1 00:11:23.649 --rc genhtml_branch_coverage=1 00:11:23.649 --rc genhtml_function_coverage=1 00:11:23.649 --rc genhtml_legend=1 00:11:23.649 --rc geninfo_all_blocks=1 00:11:23.649 ' 00:11:23.649 12:13:32 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:11:23.649 --rc lcov_branch_coverage=1 00:11:23.649 --rc lcov_function_coverage=1 00:11:23.649 --rc genhtml_branch_coverage=1 00:11:23.649 --rc genhtml_function_coverage=1 00:11:23.649 --rc genhtml_legend=1 00:11:23.649 --rc geninfo_all_blocks=1 00:11:23.649 --no-external' 00:11:23.649 12:13:32 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:11:23.649 --rc lcov_branch_coverage=1 00:11:23.649 --rc lcov_function_coverage=1 00:11:23.649 --rc genhtml_branch_coverage=1 00:11:23.649 --rc genhtml_function_coverage=1 00:11:23.649 --rc genhtml_legend=1 00:11:23.649 --rc geninfo_all_blocks=1 00:11:23.649 --no-external' 00:11:23.649 12:13:32 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:11:23.649 lcov: LCOV version 1.14 00:11:23.649 12:13:33 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:11:38.617 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:11:38.617 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:11:53.506 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:11:53.506 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:11:53.506 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:11:53.506 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:11:53.506 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:11:53.506 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:11:53.506 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:11:53.506 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:11:53.506 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:11:53.506 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:11:53.506 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:11:53.506 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:11:53.506 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:11:53.506 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:11:53.506 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:11:53.506 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:11:53.506 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:11:53.506 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:11:53.506 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:11:53.506 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:11:53.506 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:11:53.506 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:11:53.506 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:11:53.506 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:11:53.506 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:11:53.506 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:11:53.506 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:11:53.506 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:11:53.506 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:11:53.506 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:11:53.506 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:11:53.506 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:11:53.506 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:11:53.506 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:11:53.506 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:11:53.506 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:11:53.506 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:11:53.506 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:11:53.506 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:11:53.506 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:11:53.506 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:11:53.506 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:11:53.506 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:11:53.506 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:11:53.506 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:11:53.506 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:11:53.506 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:11:53.506 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:11:53.506 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions 
found 00:11:53.506 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:11:53.506 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:11:53.506 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:11:53.506 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:11:53.506 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:11:53.506 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:11:53.506 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:11:53.506 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:11:53.506 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:11:53.506 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:11:53.506 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:11:53.506 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:11:53.506 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:11:53.506 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:11:53.506 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:11:53.506 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:11:53.506 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:11:53.507 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:11:53.507 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:11:53.507 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:11:53.507 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:11:53.507 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:11:53.507 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:11:53.507 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:11:53.507 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:11:53.507 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:11:53.507 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:11:53.507 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:11:53.507 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:11:53.507 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:11:53.507 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:11:53.507 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:11:53.507 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 
00:11:53.507 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:11:53.507 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:11:53.507 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:11:53.507 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:11:53.507 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:11:53.507 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:11:53.507 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:11:53.507 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:11:53.507 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:11:53.507 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:11:53.507 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:11:53.507 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:11:53.507 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:11:53.507 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:11:53.507 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:11:53.507 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:11:53.507 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:11:53.507 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:11:53.507 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:11:53.507 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:11:53.507 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:11:53.507 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:11:53.507 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:11:53.507 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:11:53.507 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:11:53.507 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:11:53.507 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:11:53.507 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:11:53.507 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:11:53.507 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:11:53.507 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:11:53.507 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:11:53.507 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 
00:11:53.507 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:11:53.507 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:11:53.507 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:11:53.507 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:11:53.507 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:11:53.507 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:11:53.507 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:11:53.507 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:11:53.507 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:11:53.507 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:11:53.507 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:11:53.507 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:11:53.507 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:11:53.507 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:11:53.507 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:11:53.507 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:11:53.507 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:11:53.507 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:11:53.507 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:11:53.507 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:11:53.507 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:11:53.507 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:11:53.507 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:11:53.507 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:11:53.507 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:11:53.507 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:11:53.507 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:11:53.507 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:11:53.507 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:11:53.507 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:11:53.507 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:11:53.507 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:11:53.507 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 
00:11:53.507 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:11:53.507 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:11:53.507 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:11:53.507 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:11:53.507 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:11:53.507 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:11:53.507 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:11:53.507 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:11:53.507 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:11:53.507 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:11:53.507 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:11:53.507 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:11:53.507 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:11:53.507 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:11:53.508 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:11:53.508 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:11:53.508 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:11:53.508 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:11:53.508 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:11:53.508 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:11:53.508 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:11:53.508 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:11:53.508 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:11:53.508 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:11:53.508 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:11:53.508 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:11:53.508 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:11:53.508 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:11:56.037 12:14:05 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:11:56.037 12:14:05 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:56.037 12:14:05 -- common/autotest_common.sh@10 -- # set +x 00:11:56.037 12:14:05 -- spdk/autotest.sh@91 -- # rm -f 00:11:56.037 12:14:05 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:56.296 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:57.230 0000:00:11.0 (1b36 0010): Already using the nvme 
driver 00:11:57.230 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:11:57.231 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:11:57.231 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:11:57.231 12:14:06 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:11:57.231 12:14:06 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:11:57.231 12:14:06 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:11:57.231 12:14:06 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:11:57.231 12:14:06 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:11:57.231 12:14:06 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:11:57.231 12:14:06 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:11:57.231 12:14:06 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:11:57.231 12:14:06 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:11:57.231 12:14:06 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:11:57.231 12:14:06 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:11:57.231 12:14:06 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:11:57.231 12:14:06 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:11:57.231 12:14:06 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:11:57.231 12:14:06 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:11:57.231 12:14:06 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:11:57.231 12:14:06 -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:11:57.231 12:14:06 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:11:57.231 12:14:06 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:11:57.231 12:14:06 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:11:57.231 12:14:06 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:11:57.231 12:14:06 -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:11:57.231 12:14:06 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:11:57.231 12:14:06 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:11:57.231 12:14:06 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:11:57.231 12:14:06 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:11:57.231 12:14:06 -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:11:57.231 12:14:06 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:11:57.231 12:14:06 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:11:57.231 12:14:06 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:11:57.231 12:14:06 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:11:57.231 12:14:06 -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:11:57.231 12:14:06 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:11:57.231 12:14:06 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:11:57.231 12:14:06 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:11:57.231 12:14:06 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:11:57.231 12:14:06 -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:11:57.231 12:14:06 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:11:57.231 12:14:06 -- common/autotest_common.sh@1665 
-- # [[ none != none ]] 00:11:57.231 12:14:06 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:11:57.231 12:14:06 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:11:57.231 12:14:06 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:11:57.231 12:14:06 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:11:57.231 12:14:06 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:11:57.231 12:14:06 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:11:57.231 No valid GPT data, bailing 00:11:57.231 12:14:06 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:11:57.231 12:14:06 -- scripts/common.sh@391 -- # pt= 00:11:57.231 12:14:06 -- scripts/common.sh@392 -- # return 1 00:11:57.231 12:14:06 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:11:57.231 1+0 records in 00:11:57.231 1+0 records out 00:11:57.231 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0209629 s, 50.0 MB/s 00:11:57.231 12:14:06 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:11:57.231 12:14:06 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:11:57.231 12:14:06 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:11:57.231 12:14:06 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:11:57.231 12:14:06 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:11:57.489 No valid GPT data, bailing 00:11:57.489 12:14:06 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:11:57.489 12:14:06 -- scripts/common.sh@391 -- # pt= 00:11:57.489 12:14:06 -- scripts/common.sh@392 -- # return 1 00:11:57.489 12:14:06 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:11:57.489 1+0 records in 00:11:57.489 1+0 records out 00:11:57.489 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00610239 s, 172 MB/s 00:11:57.489 12:14:06 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:11:57.489 12:14:06 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:11:57.489 12:14:06 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n1 00:11:57.489 12:14:06 -- scripts/common.sh@378 -- # local block=/dev/nvme2n1 pt 00:11:57.489 12:14:06 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:11:57.489 No valid GPT data, bailing 00:11:57.489 12:14:06 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:11:57.489 12:14:06 -- scripts/common.sh@391 -- # pt= 00:11:57.489 12:14:06 -- scripts/common.sh@392 -- # return 1 00:11:57.489 12:14:06 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:11:57.489 1+0 records in 00:11:57.489 1+0 records out 00:11:57.489 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00635062 s, 165 MB/s 00:11:57.489 12:14:06 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:11:57.489 12:14:06 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:11:57.489 12:14:06 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n2 00:11:57.489 12:14:06 -- scripts/common.sh@378 -- # local block=/dev/nvme2n2 pt 00:11:57.489 12:14:06 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:11:57.490 No valid GPT data, bailing 00:11:57.490 12:14:06 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:11:57.490 12:14:06 -- scripts/common.sh@391 -- # pt= 00:11:57.490 12:14:06 -- scripts/common.sh@392 -- # return 1 00:11:57.490 12:14:06 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:11:57.490 1+0 records in 
00:11:57.490 1+0 records out 00:11:57.490 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00531336 s, 197 MB/s 00:11:57.490 12:14:06 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:11:57.490 12:14:06 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:11:57.490 12:14:06 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n3 00:11:57.490 12:14:06 -- scripts/common.sh@378 -- # local block=/dev/nvme2n3 pt 00:11:57.490 12:14:06 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:11:57.490 No valid GPT data, bailing 00:11:57.748 12:14:06 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:11:57.748 12:14:06 -- scripts/common.sh@391 -- # pt= 00:11:57.748 12:14:06 -- scripts/common.sh@392 -- # return 1 00:11:57.748 12:14:06 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:11:57.748 1+0 records in 00:11:57.748 1+0 records out 00:11:57.748 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00627772 s, 167 MB/s 00:11:57.748 12:14:06 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:11:57.748 12:14:06 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:11:57.748 12:14:07 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme3n1 00:11:57.748 12:14:07 -- scripts/common.sh@378 -- # local block=/dev/nvme3n1 pt 00:11:57.748 12:14:07 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:11:57.748 No valid GPT data, bailing 00:11:57.748 12:14:07 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:11:57.748 12:14:07 -- scripts/common.sh@391 -- # pt= 00:11:57.748 12:14:07 -- scripts/common.sh@392 -- # return 1 00:11:57.748 12:14:07 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:11:57.748 1+0 records in 00:11:57.748 1+0 records out 00:11:57.748 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00441782 s, 237 MB/s 00:11:57.748 12:14:07 -- spdk/autotest.sh@118 -- # sync 00:11:57.748 12:14:07 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:11:57.748 12:14:07 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:11:57.748 12:14:07 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:12:01.036 12:14:09 -- spdk/autotest.sh@124 -- # uname -s 00:12:01.036 12:14:09 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:12:01.036 12:14:09 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:12:01.036 12:14:09 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:01.036 12:14:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:01.036 12:14:09 -- common/autotest_common.sh@10 -- # set +x 00:12:01.036 ************************************ 00:12:01.036 START TEST setup.sh 00:12:01.036 ************************************ 00:12:01.036 12:14:09 setup.sh -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:12:01.036 * Looking for test storage... 
00:12:01.036 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:12:01.036 12:14:10 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:12:01.036 12:14:10 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:12:01.036 12:14:10 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:12:01.036 12:14:10 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:01.036 12:14:10 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:01.036 12:14:10 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:12:01.036 ************************************ 00:12:01.036 START TEST acl 00:12:01.036 ************************************ 00:12:01.036 12:14:10 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:12:01.036 * Looking for test storage... 00:12:01.036 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:12:01.036 12:14:10 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:12:01.036 12:14:10 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:12:01.036 12:14:10 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:12:01.036 12:14:10 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:12:01.037 12:14:10 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:12:01.037 12:14:10 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:12:01.037 12:14:10 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:12:01.037 12:14:10 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:12:01.037 12:14:10 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:12:01.037 12:14:10 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:12:01.037 12:14:10 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:12:01.037 12:14:10 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:12:01.037 12:14:10 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:12:01.037 12:14:10 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:12:01.037 12:14:10 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:12:01.037 12:14:10 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:12:01.037 12:14:10 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:12:01.037 12:14:10 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:12:01.037 12:14:10 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:12:01.037 12:14:10 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:12:01.037 12:14:10 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:12:01.037 12:14:10 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:12:01.037 12:14:10 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:12:01.037 12:14:10 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:12:01.037 12:14:10 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:12:01.037 12:14:10 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:12:01.037 12:14:10 
setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:12:01.037 12:14:10 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:12:01.037 12:14:10 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:12:01.037 12:14:10 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:12:01.037 12:14:10 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:12:01.037 12:14:10 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:12:01.037 12:14:10 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:12:01.037 12:14:10 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:12:01.037 12:14:10 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:12:01.037 12:14:10 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:12:01.037 12:14:10 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:12:01.037 12:14:10 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:12:01.037 12:14:10 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:12:01.037 12:14:10 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:12:01.037 12:14:10 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:12:01.037 12:14:10 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:12:01.037 12:14:10 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:12:01.037 12:14:10 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:12:01.037 12:14:10 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:12:01.037 12:14:10 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:02.413 12:14:11 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:12:02.413 12:14:11 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:12:02.413 12:14:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:12:02.413 12:14:11 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:12:02.413 12:14:11 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:12:02.413 12:14:11 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:12:02.981 12:14:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:12:02.981 12:14:12 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:12:02.981 12:14:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:12:03.549 Hugepages 00:12:03.549 node hugesize free / total 00:12:03.549 12:14:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:12:03.549 12:14:12 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:12:03.549 12:14:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:12:03.549 00:12:03.549 Type BDF Vendor Device NUMA Driver Device Block devices 00:12:03.549 12:14:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:12:03.549 12:14:12 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:12:03.549 12:14:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:12:03.808 12:14:13 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:12:03.808 12:14:13 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:12:03.808 12:14:13 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:12:03.808 12:14:13 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ 
driver _ 00:12:03.808 12:14:13 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:12:03.808 12:14:13 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:12:03.808 12:14:13 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:12:03.808 12:14:13 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:12:03.808 12:14:13 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:12:03.808 12:14:13 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:12:04.066 12:14:13 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:12:04.067 12:14:13 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:12:04.067 12:14:13 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:12:04.067 12:14:13 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:12:04.067 12:14:13 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:12:04.067 12:14:13 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:12:04.067 12:14:13 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:12.0 == *:*:*.* ]] 00:12:04.067 12:14:13 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:12:04.067 12:14:13 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:12:04.067 12:14:13 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:12:04.067 12:14:13 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:12:04.067 12:14:13 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:12:04.067 12:14:13 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:13.0 == *:*:*.* ]] 00:12:04.067 12:14:13 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:12:04.067 12:14:13 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\3\.\0* ]] 00:12:04.067 12:14:13 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:12:04.067 12:14:13 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:12:04.067 12:14:13 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:12:04.067 12:14:13 setup.sh.acl -- setup/acl.sh@24 -- # (( 4 > 0 )) 00:12:04.067 12:14:13 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:12:04.067 12:14:13 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:04.067 12:14:13 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:04.067 12:14:13 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:12:04.326 ************************************ 00:12:04.326 START TEST denied 00:12:04.326 ************************************ 00:12:04.326 12:14:13 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:12:04.326 12:14:13 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:12:04.326 12:14:13 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:12:04.326 12:14:13 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:12:04.326 12:14:13 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:12:04.326 12:14:13 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:12:05.719 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:12:05.719 12:14:15 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:12:05.719 12:14:15 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:12:05.719 12:14:15 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:12:05.719 12:14:15 setup.sh.acl.denied -- 
setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:12:05.719 12:14:15 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:12:05.719 12:14:15 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:12:05.719 12:14:15 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:12:05.719 12:14:15 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:12:05.719 12:14:15 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:12:05.719 12:14:15 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:12.306 00:12:12.306 real 0m8.007s 00:12:12.306 user 0m1.053s 00:12:12.306 sys 0m2.070s 00:12:12.306 12:14:21 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:12.306 ************************************ 00:12:12.306 END TEST denied 00:12:12.306 ************************************ 00:12:12.306 12:14:21 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:12:12.306 12:14:21 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:12:12.306 12:14:21 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:12:12.306 12:14:21 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:12.306 12:14:21 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:12.306 12:14:21 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:12:12.306 ************************************ 00:12:12.306 START TEST allowed 00:12:12.306 ************************************ 00:12:12.306 12:14:21 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:12:12.306 12:14:21 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:12:12.306 12:14:21 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:12:12.306 12:14:21 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:12:12.306 12:14:21 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:12:12.306 12:14:21 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:12:13.683 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:12:13.683 12:14:23 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:13.683 12:14:23 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:12:13.683 12:14:23 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:12:13.683 12:14:23 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:12:13.683 12:14:23 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:12:13.683 12:14:23 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:12:13.683 12:14:23 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:12:13.683 12:14:23 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:12:13.683 12:14:23 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:12.0 ]] 00:12:13.683 12:14:23 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:12.0/driver 00:12:13.683 12:14:23 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:12:13.683 12:14:23 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:12:13.683 12:14:23 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in 
"$@" 00:12:13.683 12:14:23 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:13.0 ]] 00:12:13.683 12:14:23 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:13.0/driver 00:12:13.683 12:14:23 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:12:13.684 12:14:23 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:12:13.684 12:14:23 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:12:13.684 12:14:23 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:12:13.684 12:14:23 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:15.060 00:12:15.060 real 0m2.896s 00:12:15.060 user 0m1.156s 00:12:15.060 sys 0m1.769s 00:12:15.060 12:14:24 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:15.060 12:14:24 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:12:15.060 ************************************ 00:12:15.060 END TEST allowed 00:12:15.060 ************************************ 00:12:15.319 12:14:24 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:12:15.319 ************************************ 00:12:15.319 END TEST acl 00:12:15.319 ************************************ 00:12:15.319 00:12:15.319 real 0m14.498s 00:12:15.319 user 0m3.685s 00:12:15.319 sys 0m5.997s 00:12:15.319 12:14:24 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:15.319 12:14:24 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:12:15.319 12:14:24 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:12:15.319 12:14:24 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:12:15.319 12:14:24 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:15.319 12:14:24 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:15.319 12:14:24 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:12:15.319 ************************************ 00:12:15.320 START TEST hugepages 00:12:15.320 ************************************ 00:12:15.320 12:14:24 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:12:15.320 * Looking for test storage... 
00:12:15.320 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:12:15.320 12:14:24 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:12:15.320 12:14:24 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:12:15.320 12:14:24 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:12:15.320 12:14:24 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:12:15.320 12:14:24 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:12:15.320 12:14:24 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:12:15.320 12:14:24 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:12:15.320 12:14:24 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:12:15.320 12:14:24 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:12:15.320 12:14:24 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:12:15.320 12:14:24 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:15.320 12:14:24 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:15.320 12:14:24 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:15.320 12:14:24 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:12:15.320 12:14:24 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:15.320 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:15.320 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:15.320 12:14:24 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 5771520 kB' 'MemAvailable: 7370736 kB' 'Buffers: 2436 kB' 'Cached: 1812488 kB' 'SwapCached: 0 kB' 'Active: 450424 kB' 'Inactive: 1472444 kB' 'Active(anon): 118456 kB' 'Inactive(anon): 0 kB' 'Active(file): 331968 kB' 'Inactive(file): 1472444 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 109580 kB' 'Mapped: 48732 kB' 'Shmem: 10512 kB' 'KReclaimable: 63488 kB' 'Slab: 139964 kB' 'SReclaimable: 63488 kB' 'SUnreclaim: 76476 kB' 'KernelStack: 6236 kB' 'PageTables: 3896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 332820 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:12:15.320 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:15.320 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:15.320 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:15.580 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:15.580 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:15.580 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:15.580 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': 
' 00:12:15.580 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:15.580 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:15.580 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:15.580 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:15.580 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:15.580 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:15.580 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:15.580 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:15.580 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:15.580 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:15.580 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:15.580 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:15.580 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:15.580 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:15.580 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:15.580 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:15.580 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:15.580 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:15.580 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:15.580 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:15.580 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:15.580 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:15.580 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:15.580 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:15.580 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:15.580 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:15.580 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:15.580 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:15.580 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:15.580 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:15.580 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:15.580 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:15.580 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:15.580 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:15.580 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:15.580 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:15.580 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:15.580 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:15.580 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:15.580 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:15.580 12:14:24 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:15.580 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:15.580 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:15.580 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:15.580 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:15.580 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:15.580 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:15.580 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:15.580 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # 
read -r var val _ 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:15.581 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:15.582 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:15.582 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:15.582 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:15.582 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:15.582 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:15.582 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:15.582 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:15.582 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:15.582 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:15.582 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:15.582 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:15.582 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:15.582 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:15.582 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:15.582 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:15.582 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:15.582 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:15.582 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:15.582 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:15.582 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:15.582 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:15.582 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:15.582 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:15.582 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:15.582 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:15.582 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:15.582 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:15.582 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:15.582 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:12:15.582 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:15.582 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:12:15.582 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:12:15.582 12:14:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
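The long field-by-field trace above, which concludes just below by matching Hugepagesize and echoing 2048, is setup/common.sh's get_meminfo scanning /proc/meminfo for a single key. A compact sketch of the pattern (the real helper mapfiles the whole file and can also read a per-node meminfo; that detail is omitted here):

get_meminfo_sketch() {
    local want=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$want" ]]; then
            echo "$val"     # most values are reported in kB
            return 0
        fi
    done < /proc/meminfo
    return 1
}
get_meminfo_sketch Hugepagesize    # prints 2048 on this runner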
00:12:15.582 12:14:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:12:15.582 12:14:24 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:12:15.582 12:14:24 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:12:15.582 12:14:24 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:12:15.582 12:14:24 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:12:15.582 12:14:24 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:12:15.582 12:14:24 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:12:15.582 12:14:24 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:12:15.582 12:14:24 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:12:15.582 12:14:24 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:12:15.582 12:14:24 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:12:15.582 12:14:24 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:12:15.582 12:14:24 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:12:15.582 12:14:24 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:12:15.582 12:14:24 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:12:15.582 12:14:24 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:12:15.582 12:14:24 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:12:15.582 12:14:24 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:12:15.582 12:14:24 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:12:15.582 12:14:24 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:12:15.582 12:14:24 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:12:15.582 12:14:24 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:12:15.582 12:14:24 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:12:15.582 12:14:24 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:12:15.582 12:14:24 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:12:15.582 12:14:24 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:12:15.582 12:14:24 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:15.582 12:14:24 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:15.582 12:14:24 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:12:15.582 ************************************ 00:12:15.582 START TEST default_setup 00:12:15.582 ************************************ 00:12:15.582 12:14:24 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:12:15.582 12:14:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:12:15.582 12:14:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:12:15.582 12:14:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:12:15.582 12:14:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:12:15.582 12:14:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # 
node_ids=('0') 00:12:15.582 12:14:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:12:15.582 12:14:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:12:15.582 12:14:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:12:15.582 12:14:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:12:15.582 12:14:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:12:15.582 12:14:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:12:15.582 12:14:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:12:15.582 12:14:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:12:15.582 12:14:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:12:15.582 12:14:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:12:15.582 12:14:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:12:15.582 12:14:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:12:15.582 12:14:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:12:15.582 12:14:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:12:15.582 12:14:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:12:15.582 12:14:24 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:12:15.582 12:14:24 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:16.192 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:17.131 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:17.131 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:12:17.131 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:12:17.131 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:17.131 12:14:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:12:17.131 12:14:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:12:17.131 12:14:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:12:17.131 12:14:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:12:17.131 12:14:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:12:17.131 12:14:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:12:17.131 12:14:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:12:17.131 12:14:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:12:17.131 12:14:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:12:17.131 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:12:17.131 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:12:17.131 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:12:17.131 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:12:17.131 
12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:17.131 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:17.131 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:17.131 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:12:17.131 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7833392 kB' 'MemAvailable: 9432368 kB' 'Buffers: 2436 kB' 'Cached: 1812476 kB' 'SwapCached: 0 kB' 'Active: 462556 kB' 'Inactive: 1472472 kB' 'Active(anon): 130588 kB' 'Inactive(anon): 0 kB' 'Active(file): 331968 kB' 'Inactive(file): 1472472 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 121744 kB' 'Mapped: 48840 kB' 'Shmem: 10472 kB' 'KReclaimable: 62948 kB' 'Slab: 139332 kB' 'SReclaimable: 62948 kB' 'SUnreclaim: 76384 kB' 'KernelStack: 6320 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 349204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54996 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.132 12:14:26 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.132 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
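While the AnonHugePages scan continues below, the page-count arithmetic from the default_setup prologue traced a little earlier (hugepages.sh@49-@73) is easy to verify by hand: the requested size and the detected page size are both in kB, so the per-node count is their quotient (presumably size / default_hugepages; the trace shows the result directly):

size_kb=2097152        # argument to get_test_nr_hugepages (2 GiB)
hugepagesize_kb=2048   # default_hugepages detected from /proc/meminfo
echo $(( size_kb / hugepagesize_kb ))   # 1024, matching nr_hugepages and nodes_test[0]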
00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7833144 kB' 'MemAvailable: 9432120 kB' 'Buffers: 2436 kB' 'Cached: 1812476 kB' 'SwapCached: 0 kB' 'Active: 462316 kB' 'Inactive: 1472472 kB' 'Active(anon): 130348 kB' 'Inactive(anon): 0 kB' 'Active(file): 331968 kB' 'Inactive(file): 1472472 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 121416 kB' 'Mapped: 48736 kB' 'Shmem: 10472 kB' 'KReclaimable: 62948 kB' 'Slab: 139332 kB' 'SReclaimable: 62948 kB' 'SUnreclaim: 76384 kB' 'KernelStack: 6272 kB' 'PageTables: 4112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 349204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55012 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 
'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.133 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:17.134 12:14:26 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.134 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
read -r var val _ 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# continue 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.135 12:14:26 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7833300 kB' 'MemAvailable: 9432276 kB' 'Buffers: 2436 kB' 'Cached: 1812476 kB' 'SwapCached: 0 kB' 'Active: 462300 kB' 'Inactive: 1472472 kB' 'Active(anon): 130332 kB' 'Inactive(anon): 0 kB' 'Active(file): 331968 kB' 'Inactive(file): 1472472 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 121404 kB' 'Mapped: 48736 kB' 'Shmem: 10472 kB' 'KReclaimable: 62948 kB' 'Slab: 139328 kB' 'SReclaimable: 62948 kB' 'SUnreclaim: 76380 kB' 'KernelStack: 6272 kB' 'PageTables: 4112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 349204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55012 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:17.135 12:14:26 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:17.135 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var 
val _ 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.136 12:14:26 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.136 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.137 
12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
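Note: once the HugePages_Rsvd scan just below finishes, hugepages.sh has all three counters it needs (anonymous, surplus and reserved huge pages, each 0 in this run) and checks them against the 1024 pages it configured. A hedged sketch of that verification step as traced at setup/hugepages.sh@97-110, reusing the get_meminfo sketch above; variable names follow the trace, and the expression that the log shows already expanded to the literal 1024 is an assumption:

  nr_hugepages=1024                      # requested page count, echoed below as nr_hugepages=1024
  anon=$(get_meminfo AnonHugePages)      # -> 0
  surp=$(get_meminfo HugePages_Surp)     # -> 0
  resv=$(get_meminfo HugePages_Rsvd)     # -> 0

  echo "nr_hugepages=$nr_hugepages"
  echo "resv_hugepages=$resv"
  echo "surplus_hugepages=$surp"
  echo "anon_hugepages=$anon"

  # Every configured page has to be accounted for: the pool must equal
  # requested + surplus + reserved, and with both extras at 0 the request itself.
  (( 1024 == nr_hugepages + surp + resv ))   # traced at setup/hugepages.sh@107
  (( 1024 == nr_hugepages ))                 # traced at setup/hugepages.sh@109

  # the script then re-reads the kernel's view of the pool size
  total=$(get_meminfo HugePages_Total)       # -> 1024 in this run

All three counters come back 0 and the meminfo snapshots report HugePages_Total: 1024, so both assertions hold; the per-key scan for HugePages_Total that resumes after this point is that final re-read.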
00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:12:17.137 nr_hugepages=1024 00:12:17.137 resv_hugepages=0 00:12:17.137 surplus_hugepages=0 00:12:17.137 anon_hugepages=0 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == 
nr_hugepages )) 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:12:17.137 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:12:17.138 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:17.138 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:17.138 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:17.138 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:12:17.138 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:17.138 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.138 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7833300 kB' 'MemAvailable: 9432276 kB' 'Buffers: 2436 kB' 'Cached: 1812476 kB' 'SwapCached: 0 kB' 'Active: 462300 kB' 'Inactive: 1472472 kB' 'Active(anon): 130332 kB' 'Inactive(anon): 0 kB' 'Active(file): 331968 kB' 'Inactive(file): 1472472 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 121404 kB' 'Mapped: 48736 kB' 'Shmem: 10472 kB' 'KReclaimable: 62948 kB' 'Slab: 139328 kB' 'SReclaimable: 62948 kB' 'SUnreclaim: 76380 kB' 'KernelStack: 6272 kB' 'PageTables: 4112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 349204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55012 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:12:17.138 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.138 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:17.138 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.138 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.138 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.399 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:17.399 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.399 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.399 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.399 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:17.399 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[xtrace condensed: setup/common.sh@31-32 repeats "IFS=': '", "read -r var val _", a [[ <field> == HugePages_Total ]] test and "continue" for every remaining /proc/meminfo field (Buffers through FilePmdMapped) while get_meminfo scans for HugePages_Total]
00:12:17.400 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.400 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read
-r var val _ 00:12:17.400 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:17.400 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.400 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.400 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.400 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:17.400 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.400 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.400 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.400 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:17.400 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.400 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.400 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.400 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:17.400 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:12:17.400 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:12:17.400 12:14:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:12:17.400 12:14:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:12:17.400 12:14:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:12:17.400 12:14:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:12:17.400 12:14:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:12:17.400 12:14:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1 00:12:17.400 12:14:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:12:17.400 12:14:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:12:17.400 12:14:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:12:17.400 12:14:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:12:17.400 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:12:17.400 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:12:17.400 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:12:17.400 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:12:17.400 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:17.400 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:12:17.400 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:12:17.400 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 
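The wall of IFS/read/continue entries above and below is the per-field scan inside a get_meminfo-style helper: it reads /proc/meminfo (or /sys/devices/system/node/node0/meminfo once a node id is supplied, as at setup/common.sh@23-24 above), strips the leading "Node <n> " prefix, and walks the lines with IFS=': ' until the requested counter (HugePages_Total earlier, HugePages_Surp here) is found and echoed. A minimal standalone re-creation of that pattern, written from what this trace shows rather than taken from SPDK's actual setup/common.sh, might look like:

  #!/usr/bin/env bash
  # Illustrative sketch of the meminfo scan traced above; not SPDK's real helper.
  shopt -s extglob

  get_meminfo() {
      local get=$1 node=${2:-} mem_f=/proc/meminfo
      local -a mem
      local line var val _
      # Per-node counters live in sysfs, mirroring setup/common.sh@23-24 in the trace.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      # Per-node meminfo lines carry a "Node <n> " prefix; drop it.
      mem=("${mem[@]#Node +([0-9]) }")
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue   # the long run of "continue" entries in the log
          echo "$val"
          return 0
      done
      return 1
  }

  get_meminfo HugePages_Total     # e.g. 1024 on this VM after default_setup
  get_meminfo HugePages_Surp 0    # surplus 2 MiB pages on NUMA node 0

The trailing unit of each meminfo line (the "kB") lands in the throwaway _ variable, which is why the trace reads three fields per line.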
00:12:17.400 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:17.400 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.400 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.400 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7832796 kB' 'MemUsed: 4409184 kB' 'SwapCached: 0 kB' 'Active: 462304 kB' 'Inactive: 1472472 kB' 'Active(anon): 130336 kB' 'Inactive(anon): 0 kB' 'Active(file): 331968 kB' 'Inactive(file): 1472472 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'FilePages: 1814912 kB' 'Mapped: 48736 kB' 'AnonPages: 121404 kB' 'Shmem: 10472 kB' 'KernelStack: 6272 kB' 'PageTables: 4112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62948 kB' 'Slab: 139324 kB' 'SReclaimable: 62948 kB' 'SUnreclaim: 76376 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:12:17.400 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:17.400 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.400 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.400 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.400 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:17.400 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.400 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.400 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.400 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:17.400 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.400 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.400 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.400 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:17.400 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.400 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.400 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.400 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:17.400 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.400 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.400 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.400 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:17.400 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.400 12:14:26 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:12:17.400 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: the same IFS/read/test/continue cycle skips the node0 meminfo fields from Active(anon) through ShmemPmdMapped while get_meminfo scans for HugePages_Surp]
00:12:17.401 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:17.401 12:14:26
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.401 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.401 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.401 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:17.401 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.401 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.401 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.401 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:17.401 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.401 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.401 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.401 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:17.401 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.401 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.401 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.401 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:17.401 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:12:17.401 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:12:17.401 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:12:17.401 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:17.401 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:12:17.401 12:14:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:12:17.401 node0=1024 expecting 1024 00:12:17.401 12:14:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:12:17.401 12:14:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:12:17.401 12:14:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:12:17.401 12:14:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:12:17.401 12:14:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:12:17.401 12:14:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:12:17.401 00:12:17.401 real 0m1.837s 00:12:17.401 user 0m0.663s 00:12:17.401 sys 0m1.118s 00:12:17.401 ************************************ 00:12:17.401 END TEST default_setup 00:12:17.401 ************************************ 00:12:17.401 12:14:26 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:17.401 12:14:26 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:12:17.401 12:14:26 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:12:17.401 12:14:26 setup.sh.hugepages -- 
setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:12:17.401 12:14:26 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:17.401 12:14:26 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:17.401 12:14:26 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:12:17.401 ************************************ 00:12:17.401 START TEST per_node_1G_alloc 00:12:17.401 ************************************ 00:12:17.401 12:14:26 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:12:17.401 12:14:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:12:17.401 12:14:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:12:17.401 12:14:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:12:17.401 12:14:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:12:17.401 12:14:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:12:17.401 12:14:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:12:17.401 12:14:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:12:17.401 12:14:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:12:17.402 12:14:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:12:17.402 12:14:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:12:17.402 12:14:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:12:17.402 12:14:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:12:17.402 12:14:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:12:17.402 12:14:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:12:17.402 12:14:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:12:17.402 12:14:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:12:17.402 12:14:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:12:17.402 12:14:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:12:17.402 12:14:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:12:17.402 12:14:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:12:17.402 12:14:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:12:17.402 12:14:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:12:17.402 12:14:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:12:17.402 12:14:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:12:17.402 12:14:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:17.969 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:18.231 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:18.231 0000:00:10.0 
(1b36 0010): Already using the uio_pci_generic driver 00:12:18.231 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:18.231 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:18.231 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:12:18.231 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:12:18.231 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:12:18.231 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:12:18.231 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:12:18.231 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:12:18.231 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:12:18.231 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:12:18.231 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:12:18.231 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:12:18.231 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:12:18.231 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:12:18.231 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:12:18.231 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:18.231 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:18.231 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:18.231 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:18.231 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:18.231 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:18.231 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.231 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.231 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8897796 kB' 'MemAvailable: 10496780 kB' 'Buffers: 2436 kB' 'Cached: 1812480 kB' 'SwapCached: 0 kB' 'Active: 462688 kB' 'Inactive: 1472476 kB' 'Active(anon): 130720 kB' 'Inactive(anon): 0 kB' 'Active(file): 331968 kB' 'Inactive(file): 1472476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 121784 kB' 'Mapped: 48800 kB' 'Shmem: 10472 kB' 'KReclaimable: 62960 kB' 'Slab: 139312 kB' 'SReclaimable: 62960 kB' 'SUnreclaim: 76352 kB' 'KernelStack: 6296 kB' 'PageTables: 4064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 349204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55028 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:12:18.231 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:18.231 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.231 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.231 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.231 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:18.231 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.231 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.231 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.232 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:18.232 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.232 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.232 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.232 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:18.232 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.232 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.232 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.232 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:18.232 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.232 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.232 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.232 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:18.232 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.232 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.232 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.232 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:18.232 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.232 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.232 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.232 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:18.232 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.232 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:12:18.232 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: the same IFS/read/test/continue cycle skips /proc/meminfo fields from Active(anon) through Bounce while get_meminfo scans for AnonHugePages]
00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc
-- setup/common.sh@31 -- # IFS=': ' 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # 
anon=0 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8897548 kB' 'MemAvailable: 10496532 kB' 'Buffers: 2436 kB' 'Cached: 1812480 kB' 'SwapCached: 0 kB' 'Active: 462408 kB' 'Inactive: 1472476 kB' 'Active(anon): 130440 kB' 'Inactive(anon): 0 kB' 'Active(file): 331968 kB' 'Inactive(file): 1472476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 121592 kB' 'Mapped: 48736 kB' 'Shmem: 10472 kB' 'KReclaimable: 62960 kB' 'Slab: 139320 kB' 'SReclaimable: 62960 kB' 'SUnreclaim: 76360 kB' 'KernelStack: 6304 kB' 'PageTables: 4208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 349204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55012 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.233 12:14:27 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.233 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.234 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@33 -- # return 0 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:12:18.235 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:18.236 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:18.236 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:18.236 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:18.236 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:18.236 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:18.236 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.236 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.236 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8897548 kB' 'MemAvailable: 10496532 kB' 'Buffers: 2436 kB' 'Cached: 1812480 kB' 'SwapCached: 0 kB' 'Active: 462408 kB' 'Inactive: 1472476 kB' 'Active(anon): 130440 kB' 'Inactive(anon): 0 kB' 'Active(file): 331968 kB' 'Inactive(file): 1472476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 121556 kB' 'Mapped: 48736 kB' 'Shmem: 10472 kB' 'KReclaimable: 62960 kB' 'Slab: 139316 kB' 'SReclaimable: 62960 kB' 'SUnreclaim: 76356 kB' 'KernelStack: 6288 kB' 'PageTables: 4160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 349204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54996 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:12:18.236 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:18.236 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.236 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.236 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.236 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:18.236 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.236 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.236 12:14:27 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.236 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:18.236 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.236 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.236 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.236 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:18.236 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.236 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.236 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.236 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:18.236 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.236 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.236 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.236 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:18.236 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.236 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.236 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.236 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:18.236 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.236 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.236 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.236 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:18.236 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.236 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.236 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.236 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:18.236 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.236 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.236 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.236 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:18.236 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.236 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.236 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.236 12:14:27 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:18.236 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.236 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.236 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.236 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:18.236 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.236 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.236 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.236 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:18.236 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.236 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.236 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.236 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:18.236 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.236 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.236 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.236 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:18.236 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.236 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.236 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.236 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:18.236 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.236 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.236 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.236 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:18.237 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.237 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.237 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.237 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:18.237 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.237 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.237 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.237 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:18.237 
12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.237 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.237 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.237 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:18.237 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.237 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.237 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.237 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:18.237 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.237 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.237 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.237 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:18.237 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.237 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.237 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.237 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:18.237 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.237 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.237 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.237 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:18.237 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.237 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.237 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.237 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:18.237 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.237 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.237 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.237 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:18.237 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.237 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.237 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.237 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:18.237 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.237 12:14:27 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.237 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.237 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:18.237 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.237 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.237 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.237 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:18.237 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.237 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.237 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.237 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:18.237 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.237 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.237 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.237 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:18.237 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.237 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.237 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.237 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.238 12:14:27 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.238 12:14:27 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:12:18.238 nr_hugepages=512 00:12:18.238 resv_hugepages=0 00:12:18.238 surplus_hugepages=0 00:12:18.238 anon_hugepages=0 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:12:18.238 
12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:12:18.238 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:12:18.239 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:12:18.239 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:12:18.239 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:12:18.239 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:12:18.239 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:12:18.239 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:12:18.239 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:12:18.239 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:12:18.239 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:18.239 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:18.239 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:18.239 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:18.239 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:18.239 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:18.239 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.239 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.239 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8897800 kB' 'MemAvailable: 10496784 kB' 'Buffers: 2436 kB' 'Cached: 1812480 kB' 'SwapCached: 0 kB' 'Active: 462324 kB' 'Inactive: 1472476 kB' 'Active(anon): 130356 kB' 'Inactive(anon): 0 kB' 'Active(file): 331968 kB' 'Inactive(file): 1472476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 121456 kB' 'Mapped: 48736 kB' 'Shmem: 10472 kB' 'KReclaimable: 62960 kB' 'Slab: 139316 kB' 'SReclaimable: 62960 kB' 'SUnreclaim: 76356 kB' 'KernelStack: 6272 kB' 'PageTables: 4108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 349204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54996 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
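The setup/hugepages.sh@97-109 steps traced above reduce to a simple accounting check: the counters pulled out of /proc/meminfo (anon=0, surp=0, resv=0) must leave exactly the 512 requested huge pages in the pool. A minimal bash sketch of that check, using only the values echoed in the trace (the surrounding script text is a reconstruction for illustration, not SPDK's exact setup/hugepages.sh source):

    #!/usr/bin/env bash
    # Values taken from the trace: 512 huge pages requested, none anonymous,
    # surplus or reserved on this run.
    nr_hugepages=512
    anon=0
    surp=0
    resv=0

    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"

    # Same consistency checks as setup/hugepages.sh@107 and @109: the requested
    # count must equal allocated + surplus + reserved pages, and all 512 pages
    # must actually be present in the pool.
    (( 512 == nr_hugepages + surp + resv )) || exit 1
    (( 512 == nr_hugepages )) || exit 1

With the values above both arithmetic tests hold (512 == 512 + 0 + 0), so the trace proceeds to re-read HugePages_Total from /proc/meminfo.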
00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.500 12:14:27 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.500 12:14:27 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
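Every get_meminfo call in this trace begins with the same common.sh@17-29 preamble: node= is left empty, so mem_f stays /proc/meminfo, the file is slurped with mapfile, and the "Node <N> " prefix that per-node meminfo files carry is stripped by the ${mem[@]#Node +([0-9]) } expansion. A hedged reconstruction of that source-selection step (names follow the trace; the body is an approximation, not the exact text of setup/common.sh):

    shopt -s extglob   # needed for the +([0-9]) pattern used below

    get_meminfo_lines() {
        local node=$1            # empty in this trace => system-wide /proc/meminfo
        local mem_f=/proc/meminfo
        local -a mem

        # Per-node statistics live under /sys/devices/system/node/node<N>/meminfo
        # and prefix every line with "Node <N> ".
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # strip the per-node prefix, if any
        printf '%s\n' "${mem[@]}"
    }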
00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
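The setup/common.sh@31-@33 entries being unrolled here are a single loop in get_meminfo: read /proc/meminfo (or a node's own meminfo file) one "Key: value" pair at a time, skip every key that is not the one requested, and echo the value of the first match (512 for HugePages_Total a little further down). A rough standalone reconstruction of that loop, inferred from the trace rather than copied from setup/common.sh:

    # Reconstruction of the lookup traced above (illustrative, not the SPDK source).
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo line var val _
        # Per-node lookups (e.g. HugePages_Surp for node 0 below) read the
        # node's own meminfo file when it exists.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS= read -r line; do
            line=${line#Node [0-9]* }        # per-node files prefix every line with "Node N "
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then    # xtrace shows this comparison once per field
                echo "$val"
                return 0
            fi
        done < "$mem_f"
        return 1
    }
    # get_meminfo HugePages_Total     -> 512 at this point in the run
    # get_meminfo HugePages_Surp 0    -> 0 for node0, as echoed further down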
00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:12:18.500 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:12:18.501 12:14:27 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8898448 kB' 'MemUsed: 3343532 kB' 'SwapCached: 0 kB' 'Active: 462324 kB' 'Inactive: 1472476 kB' 'Active(anon): 130356 kB' 'Inactive(anon): 0 kB' 'Active(file): 331968 kB' 'Inactive(file): 1472476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'FilePages: 1814916 kB' 'Mapped: 48736 kB' 'AnonPages: 121452 kB' 'Shmem: 10472 kB' 'KernelStack: 6272 kB' 'PageTables: 4108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62960 kB' 'Slab: 139316 kB' 'SReclaimable: 62960 kB' 'SUnreclaim: 76356 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.501 12:14:27 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.501 12:14:27 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.501 12:14:27 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.501 12:14:27 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.501 12:14:27 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.501 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.502 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:12:18.502 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:18.502 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:18.502 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:18.502 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:12:18.502 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:12:18.502 node0=512 expecting 512 00:12:18.502 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:12:18.502 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:12:18.502 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:12:18.502 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:12:18.502 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:12:18.502 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:12:18.502 ************************************ 00:12:18.502 END TEST per_node_1G_alloc 00:12:18.502 ************************************ 00:12:18.502 00:12:18.502 real 0m0.969s 00:12:18.502 user 0m0.409s 00:12:18.502 sys 0m0.606s 00:12:18.502 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:18.502 12:14:27 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:12:18.502 12:14:27 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:12:18.502 12:14:27 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:12:18.502 12:14:27 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:18.502 12:14:27 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:18.502 12:14:27 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:12:18.502 ************************************ 00:12:18.502 START TEST even_2G_alloc 00:12:18.502 ************************************ 00:12:18.502 12:14:27 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:12:18.502 12:14:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:12:18.502 12:14:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:12:18.502 12:14:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:12:18.502 12:14:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:12:18.502 12:14:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:12:18.502 12:14:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:12:18.502 12:14:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:12:18.502 12:14:27 
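even_2G_alloc starts by converting the requested size into a page count: get_test_nr_hugepages 2097152 with the runner's 2048 kB default hugepage size yields nr_hugepages=1024, i.e. 2097152 kB / 2048 kB = 1024 pages (2 GiB in total). A small sketch of that arithmetic, assuming the argument is a size in kB as the traced numbers imply:

    # Illustrative reconstruction of the size -> page-count step traced at
    # setup/hugepages.sh@49-@57 (not the SPDK script itself).
    get_test_nr_hugepages() {
        local size=$1   # requested size in kB (2097152 kB == 2 GiB here)
        local default_hugepages
        default_hugepages=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 kB on this runner
        if (( size >= default_hugepages )); then
            echo $(( size / default_hugepages ))   # 2097152 / 2048 = 1024
        fi
    }
    # get_test_nr_hugepages 2097152   -> 1024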
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:12:18.502 12:14:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:12:18.502 12:14:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:12:18.502 12:14:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:12:18.502 12:14:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:12:18.502 12:14:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:12:18.502 12:14:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:12:18.502 12:14:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:12:18.502 12:14:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:12:18.502 12:14:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:12:18.502 12:14:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:12:18.502 12:14:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:12:18.502 12:14:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:12:18.502 12:14:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:12:18.502 12:14:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:12:18.502 12:14:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:12:18.502 12:14:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:19.069 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:19.332 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:19.332 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:19.332 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:19.332 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:19.332 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:12:19.332 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:12:19.332 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:12:19.332 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:12:19.332 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:12:19.332 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:12:19.332 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:12:19.332 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:12:19.332 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:12:19.332 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:12:19.332 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:12:19.332 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:12:19.332 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:19.332 12:14:28 
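The NRHUGE=1024 and HUGE_EVEN_ALLOC=yes assignments just before "setup output" are what drive this allocation: the test re-runs scripts/setup.sh with those variables set so the 1024 pages are spread evenly across the NUMA nodes (a single node on this VM). Reproducing that step by hand from an SPDK checkout would look roughly like this (run as root; values taken from the trace):

    # Re-run SPDK's setup script with the same knobs the test sets above.
    NRHUGE=1024 HUGE_EVEN_ALLOC=yes ./scripts/setup.sh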
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:19.332 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:19.332 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:19.332 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:19.332 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:19.332 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.332 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.332 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7916780 kB' 'MemAvailable: 9515760 kB' 'Buffers: 2436 kB' 'Cached: 1812476 kB' 'SwapCached: 0 kB' 'Active: 462644 kB' 'Inactive: 1472472 kB' 'Active(anon): 130676 kB' 'Inactive(anon): 0 kB' 'Active(file): 331968 kB' 'Inactive(file): 1472472 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 121740 kB' 'Mapped: 48916 kB' 'Shmem: 10472 kB' 'KReclaimable: 62960 kB' 'Slab: 139320 kB' 'SReclaimable: 62960 kB' 'SUnreclaim: 76360 kB' 'KernelStack: 6244 kB' 'PageTables: 4096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 349204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54980 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:12:19.332 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:19.332 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.332 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.332 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.332 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:19.332 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.332 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.332 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.332 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:19.332 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.332 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.332 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.332 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:19.332 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.332 12:14:28 
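The hugepages.sh@96 test a little further up ([[ always [madvise] never != *\[\n\e\v\e\r\]* ]]) appears to be checking the transparent hugepage policy before the AnonHugePages counter scanned here is used; the bracketed value is the current THP mode as reported by sysfs:

    cat /sys/kernel/mm/transparent_hugepage/enabled
    # -> always [madvise] never   (madvise selected on this runner)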
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.332 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.332 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:19.332 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.332 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.332 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
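The MemTotal/HugePages dump above now reports HugePages_Total: 1024, Hugepagesize: 2048 kB and Hugetlb: 2097152 kB, which is internally consistent: 1024 pages x 2048 kB = 2097152 kB, exactly the 2 GiB requested by even_2G_alloc. A quick check of that identity against a live /proc/meminfo (illustrative, not part of the test):

    awk '/^HugePages_Total:/ {n=$2} /^Hugepagesize:/ {sz=$2} /^Hugetlb:/ {tot=$2}
         END {printf "computed=%d kB reported=%d kB\n", n*sz, tot}' /proc/meminfo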
00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.333 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7917128 kB' 'MemAvailable: 9516108 kB' 'Buffers: 2436 kB' 'Cached: 1812476 kB' 'SwapCached: 0 kB' 'Active: 462672 kB' 'Inactive: 1472472 kB' 'Active(anon): 130704 kB' 'Inactive(anon): 0 kB' 'Active(file): 331968 kB' 'Inactive(file): 1472472 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 121824 kB' 'Mapped: 48916 kB' 'Shmem: 10472 kB' 'KReclaimable: 62960 kB' 'Slab: 139320 kB' 'SReclaimable: 62960 kB' 'SUnreclaim: 76360 kB' 'KernelStack: 6312 kB' 'PageTables: 4104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 349204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54996 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.334 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.335 12:14:28 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# continue 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.335 12:14:28 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.335 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7917012 kB' 'MemAvailable: 9515992 kB' 'Buffers: 2436 kB' 'Cached: 1812476 kB' 'SwapCached: 0 kB' 'Active: 462692 kB' 'Inactive: 1472472 kB' 'Active(anon): 130724 kB' 'Inactive(anon): 0 kB' 'Active(file): 331968 kB' 'Inactive(file): 1472472 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 121796 kB' 'Mapped: 48916 kB' 'Shmem: 10472 kB' 'KReclaimable: 62960 kB' 'Slab: 139320 kB' 'SReclaimable: 62960 kB' 'SUnreclaim: 76360 kB' 'KernelStack: 6296 kB' 'PageTables: 4056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 349204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54980 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:19.336 12:14:28 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.336 12:14:28 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:19.336 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.337 
12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:12:19.337 nr_hugepages=1024 00:12:19.337 resv_hugepages=0 00:12:19.337 surplus_hugepages=0 00:12:19.337 anon_hugepages=0 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == 
nr_hugepages )) 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:19.337 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7917448 kB' 'MemAvailable: 9516428 kB' 'Buffers: 2436 kB' 'Cached: 1812476 kB' 'SwapCached: 0 kB' 'Active: 462332 kB' 'Inactive: 1472472 kB' 'Active(anon): 130364 kB' 'Inactive(anon): 0 kB' 'Active(file): 331968 kB' 'Inactive(file): 1472472 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 121464 kB' 'Mapped: 48784 kB' 'Shmem: 10472 kB' 'KReclaimable: 62960 kB' 'Slab: 139348 kB' 'SReclaimable: 62960 kB' 'SUnreclaim: 76388 kB' 'KernelStack: 6272 kB' 'PageTables: 4108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 349204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54980 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.338 12:14:28 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:19.338 12:14:28 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.338 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 
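
The wall of xtrace above is setup/common.sh's get_meminfo helper walking /proc/meminfo (or, once a node argument is given, the per-node copy under /sys/devices/system/node) field by field until it reaches the requested key, then echoing that key's value. The sketch below is reconstructed from the traced commands, not copied from the SPDK source, so treat it as an illustration of the scan rather than the canonical helper.

    #!/usr/bin/env bash
    # Minimal sketch of the meminfo key scan visible in the xtrace above;
    # reconstructed from the traced commands, not the verbatim SPDK helper.
    shopt -s extglob    # needed for the "Node +([0-9]) " prefix strip below

    get_meminfo() {
        local get=$1 node=$2        # e.g. get=HugePages_Total, node=0 (node is optional)
        local var val _ line
        local mem_f=/proc/meminfo
        # With a node argument, prefer the per-node meminfo file if it exists.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node meminfo lines are prefixed with "Node N "; strip that prefix.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            # Walk every field, skipping until the requested key turns up.
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }

    # Example calls matching the trace: the system-wide pool and node0's surplus pages.
    get_meminfo HugePages_Total      # prints 1024 in this run
    get_meminfo HugePages_Surp 0     # prints 0 in this run

In the even_2G_alloc run traced here the scan returns 1024 for HugePages_Total and 0 for node0's HugePages_Surp, which is why the verification further down prints 'node0=1024 expecting 1024'.
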
00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7917200 kB' 'MemUsed: 4324780 kB' 'SwapCached: 0 kB' 'Active: 462336 kB' 'Inactive: 1472472 kB' 'Active(anon): 130368 kB' 'Inactive(anon): 0 kB' 'Active(file): 331968 kB' 'Inactive(file): 1472472 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'FilePages: 1814912 kB' 'Mapped: 48784 kB' 'AnonPages: 121464 kB' 'Shmem: 10472 kB' 'KernelStack: 6272 kB' 'PageTables: 4108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62960 kB' 'Slab: 139348 kB' 'SReclaimable: 62960 kB' 'SUnreclaim: 76388 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.339 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.340 12:14:28 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.340 12:14:28 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:12:19.340 node0=1024 expecting 1024 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:12:19.340 00:12:19.340 real 0m0.986s 00:12:19.340 user 0m0.426s 00:12:19.340 sys 0m0.605s 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:19.340 ************************************ 00:12:19.340 END TEST even_2G_alloc 00:12:19.340 ************************************ 00:12:19.340 12:14:28 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:12:19.599 12:14:28 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:12:19.599 12:14:28 setup.sh.hugepages -- 
setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:12:19.599 12:14:28 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:19.599 12:14:28 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:19.599 12:14:28 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:12:19.599 ************************************ 00:12:19.599 START TEST odd_alloc 00:12:19.599 ************************************ 00:12:19.599 12:14:28 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:12:19.599 12:14:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:12:19.599 12:14:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:12:19.599 12:14:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:12:19.599 12:14:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:12:19.599 12:14:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:12:19.599 12:14:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:12:19.599 12:14:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:12:19.599 12:14:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:12:19.599 12:14:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:12:19.599 12:14:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:12:19.599 12:14:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:12:19.599 12:14:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:12:19.599 12:14:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:12:19.599 12:14:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:12:19.599 12:14:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:12:19.599 12:14:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:12:19.599 12:14:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:12:19.599 12:14:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:12:19.599 12:14:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:12:19.599 12:14:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:12:19.599 12:14:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:12:19.599 12:14:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:12:19.599 12:14:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:12:19.599 12:14:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:20.165 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:20.165 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:20.165 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:20.165 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:20.165 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:20.427 12:14:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:12:20.427 12:14:29 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:12:20.427 12:14:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:12:20.427 12:14:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:12:20.427 12:14:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:12:20.427 12:14:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:12:20.427 12:14:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:12:20.427 12:14:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:12:20.427 12:14:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:12:20.427 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:12:20.427 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:12:20.427 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:12:20.427 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:20.427 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:20.427 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:20.427 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:20.427 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:20.427 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:20.427 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.427 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7917828 kB' 'MemAvailable: 9516812 kB' 'Buffers: 2436 kB' 'Cached: 1812480 kB' 'SwapCached: 0 kB' 'Active: 462612 kB' 'Inactive: 1472476 kB' 'Active(anon): 130644 kB' 'Inactive(anon): 0 kB' 'Active(file): 331968 kB' 'Inactive(file): 1472476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 121484 kB' 'Mapped: 48864 kB' 'Shmem: 10472 kB' 'KReclaimable: 62960 kB' 'Slab: 139376 kB' 'SReclaimable: 62960 kB' 'SUnreclaim: 76416 kB' 'KernelStack: 6288 kB' 'PageTables: 4168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 349204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55012 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:12:20.427 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.427 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:20.427 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.427 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.427 12:14:29 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:12:20.427 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:20.427 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.427 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.427 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.427 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:20.427 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.427 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.427 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.427 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:20.427 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.427 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.427 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.427 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:20.427 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.427 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.427 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.427 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:20.427 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.427 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.427 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.427 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.428 12:14:29 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:20.428 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7917828 kB' 'MemAvailable: 9516812 kB' 'Buffers: 2436 kB' 'Cached: 1812480 kB' 'SwapCached: 0 kB' 'Active: 462412 kB' 'Inactive: 1472476 kB' 'Active(anon): 130444 kB' 'Inactive(anon): 0 kB' 'Active(file): 331968 kB' 'Inactive(file): 1472476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 121600 kB' 'Mapped: 48736 kB' 'Shmem: 10472 kB' 'KReclaimable: 62960 kB' 'Slab: 139376 kB' 'SReclaimable: 62960 kB' 'SUnreclaim: 76416 kB' 'KernelStack: 6288 kB' 'PageTables: 4164 
kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 349204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54996 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.429 12:14:29 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
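
For context on the odd_alloc run in progress here: the trace earlier shows get_test_nr_hugepages being called with 2098176 (kB), the product of the test's HUGEMEM=2049 (MB) setting, and settling on nr_hugepages=1025 for the single NUMA node. With the 2048 kB hugepage size reported in the meminfo dumps, 2098176 kB works out to 1024.5 pages, rounded up to a deliberately odd 1025. The arithmetic below is a sketch; the numbers come from the log, while the ceiling-division form is an assumption about how the helper rounds.

    # Sizing arithmetic behind "get_test_nr_hugepages 2098176" -> nr_hugepages=1025.
    # Numbers are from the trace; the ceiling-division expression is an assumption.
    HUGEMEM=2049                                   # MB, as set by the odd_alloc test
    default_hugepages=2048                         # kB, per "Hugepagesize: 2048 kB"
    size=$(( HUGEMEM * 1024 ))                     # 2098176 kB requested
    nr_hugepages=$(( (size + default_hugepages - 1) / default_hugepages ))
    echo "$size kB / $default_hugepages kB -> $nr_hugepages pages"   # 1025, odd on purpose
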
00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
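
The anon=0 assignment a few entries back is verify_nr_hugepages first checking whether transparent hugepages are disabled outright: the traced test compares the string 'always [madvise] never' against the literal pattern '[never]', and since '[madvise]' is the selected mode the check passes and AnonHugePages (0 kB on this host) is read. A minimal sketch of that guard, assuming the string is read from /sys/kernel/mm/transparent_hugepage/enabled (the path is an assumption; the value and the pattern are from the trace) and reusing the get_meminfo sketch shown earlier:

    # Guard sketch: only count anonymous hugepages when THP is not set to "[never]".
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    anon=0
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)                  # 0 kB in this run
    fi
    echo "anon=$anon"
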
00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.429 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:12:20.430 12:14:29 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7918080 kB' 'MemAvailable: 9517064 kB' 'Buffers: 2436 kB' 'Cached: 1812480 kB' 'SwapCached: 0 kB' 'Active: 462360 kB' 'Inactive: 1472476 kB' 'Active(anon): 130392 kB' 'Inactive(anon): 0 kB' 'Active(file): 331968 kB' 'Inactive(file): 1472476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 121492 kB' 'Mapped: 48736 kB' 'Shmem: 10472 kB' 'KReclaimable: 62960 kB' 'Slab: 139376 kB' 'SReclaimable: 62960 kB' 'SUnreclaim: 76416 kB' 'KernelStack: 6272 kB' 'PageTables: 4112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 349204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54996 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.430 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.431 12:14:29 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.431 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.432 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.432 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:20.432 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.432 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.432 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.432 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:20.432 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.432 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.432 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.432 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:20.432 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# continue 00:12:20.432 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.432 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.432 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:20.432 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.432 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.432 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.432 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:20.432 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:20.433 
12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:12:20.433 nr_hugepages=1025 00:12:20.433 resv_hugepages=0 00:12:20.433 surplus_hugepages=0 00:12:20.433 anon_hugepages=0 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7919348 kB' 'MemAvailable: 9518332 kB' 'Buffers: 2436 kB' 'Cached: 1812480 kB' 'SwapCached: 0 kB' 'Active: 
462352 kB' 'Inactive: 1472476 kB' 'Active(anon): 130384 kB' 'Inactive(anon): 0 kB' 'Active(file): 331968 kB' 'Inactive(file): 1472476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 121484 kB' 'Mapped: 48736 kB' 'Shmem: 10472 kB' 'KReclaimable: 62960 kB' 'Slab: 139372 kB' 'SReclaimable: 62960 kB' 'SUnreclaim: 76412 kB' 'KernelStack: 6272 kB' 'PageTables: 4112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 349204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54980 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.433 12:14:29 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.434 
12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.434 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:12:20.435 
12:14:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7919348 kB' 'MemUsed: 4322632 kB' 'SwapCached: 0 kB' 'Active: 462356 kB' 'Inactive: 1472476 kB' 'Active(anon): 130388 kB' 'Inactive(anon): 0 kB' 'Active(file): 331968 kB' 'Inactive(file): 1472476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 1814916 kB' 'Mapped: 48736 kB' 'AnonPages: 121484 kB' 'Shmem: 10472 kB' 'KernelStack: 6272 kB' 'PageTables: 4112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62960 kB' 'Slab: 139368 kB' 'SReclaimable: 62960 kB' 'SUnreclaim: 76408 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.435 12:14:29 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.435 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:12:20.436 node0=1025 expecting 1025 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:12:20.436 
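The trace above repeats the field-matching pattern from setup/common.sh's get_meminfo: each "Key: value [kB]" line is split on ': ', the key is compared against the requested field, and the matching value is echoed back (here HugePages_Total resolves to 1025, which the odd_alloc check then verifies). A minimal standalone sketch of that parsing loop, with an illustrative helper name and invocation rather than the SPDK script itself:

#!/usr/bin/env bash
# Illustrative sketch (assumed helper name, not setup/common.sh): split
# "Key: value [kB]" lines on ': ' and echo the value for the requested key,
# e.g. HugePages_Total or HugePages_Surp, from /proc/meminfo or a per-node
# meminfo file.
get_meminfo_field() {
    local key=$1 file=${2:-/proc/meminfo}
    local var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$key" ]]; then
            echo "$val"
            return 0
        fi
    done < "$file"
    return 1
}

# Example (hypothetical usage): prints the current hugepage count, which the
# odd_alloc test above expects to equal 1025.
get_meminfo_field HugePages_Total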
00:12:20.436 real 0m1.013s 00:12:20.436 user 0m0.428s 00:12:20.436 sys 0m0.620s 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:20.436 ************************************ 00:12:20.436 END TEST odd_alloc 00:12:20.436 ************************************ 00:12:20.436 12:14:29 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:12:20.694 12:14:29 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:12:20.694 12:14:29 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:12:20.694 12:14:29 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:20.694 12:14:29 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:20.694 12:14:29 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:12:20.694 ************************************ 00:12:20.694 START TEST custom_alloc 00:12:20.694 ************************************ 00:12:20.695 12:14:29 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:12:20.695 12:14:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:12:20.695 12:14:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:12:20.695 12:14:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:12:20.695 12:14:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:12:20.695 12:14:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:12:20.695 12:14:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:12:20.695 12:14:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:12:20.695 12:14:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:12:20.695 12:14:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:12:20.695 12:14:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:12:20.695 12:14:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:12:20.695 12:14:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:12:20.695 12:14:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:12:20.695 12:14:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:12:20.695 12:14:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:12:20.695 12:14:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:12:20.695 12:14:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:12:20.695 12:14:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:12:20.695 12:14:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:12:20.695 12:14:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:12:20.695 12:14:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:12:20.695 12:14:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:12:20.695 12:14:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:12:20.695 12:14:29 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:12:20.695 12:14:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:12:20.695 12:14:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:12:20.695 12:14:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:12:20.695 12:14:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:12:20.695 12:14:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:12:20.695 12:14:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:12:20.695 12:14:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:12:20.695 12:14:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:12:20.695 12:14:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:12:20.695 12:14:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:12:20.695 12:14:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:12:20.695 12:14:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:12:20.695 12:14:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:12:20.695 12:14:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:12:20.695 12:14:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:12:20.695 12:14:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:12:20.695 12:14:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:12:20.695 12:14:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:12:20.695 12:14:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:12:20.695 12:14:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:12:20.695 12:14:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:21.263 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:21.263 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:21.263 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:21.263 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:21.263 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:21.263 12:14:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:12:21.263 12:14:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:12:21.263 12:14:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:12:21.263 12:14:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:12:21.263 12:14:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:12:21.263 12:14:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:12:21.263 12:14:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:12:21.263 12:14:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:12:21.527 12:14:30 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:12:21.527 12:14:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:12:21.527 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:12:21.527 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:12:21.527 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:12:21.527 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:21.527 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:21.527 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:21.527 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:21.527 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:21.527 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:21.527 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.527 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8973296 kB' 'MemAvailable: 10572280 kB' 'Buffers: 2436 kB' 'Cached: 1812480 kB' 'SwapCached: 0 kB' 'Active: 459232 kB' 'Inactive: 1472476 kB' 'Active(anon): 127264 kB' 'Inactive(anon): 0 kB' 'Active(file): 331968 kB' 'Inactive(file): 1472476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118380 kB' 'Mapped: 48268 kB' 'Shmem: 10472 kB' 'KReclaimable: 62956 kB' 'Slab: 139324 kB' 'SReclaimable: 62956 kB' 'SUnreclaim: 76368 kB' 'KernelStack: 6264 kB' 'PageTables: 3880 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 336372 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:12:21.527 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.527 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:21.527 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.527 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.527 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.527 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:21.527 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.527 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.527 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.527 12:14:30 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:21.527 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.527 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.527 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.527 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:21.527 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.527 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.527 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.527 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:21.527 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.527 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.527 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.527 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:21.527 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.527 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.527 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.527 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:21.527 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.527 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.527 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.527 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:21.527 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.527 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.527 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.527 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:21.527 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.527 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.527 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.527 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:21.527 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.527 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.527 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.527 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:21.527 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.527 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.527 12:14:30 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.527 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:21.527 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.527 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.528 
12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:21.528 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8973296 kB' 'MemAvailable: 10572280 kB' 'Buffers: 2436 kB' 'Cached: 1812480 kB' 'SwapCached: 0 kB' 'Active: 459192 kB' 'Inactive: 1472476 kB' 'Active(anon): 127224 kB' 'Inactive(anon): 0 kB' 'Active(file): 331968 kB' 'Inactive(file): 1472476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 118328 kB' 'Mapped: 48176 kB' 'Shmem: 10472 kB' 'KReclaimable: 62956 kB' 'Slab: 139320 kB' 'SReclaimable: 62956 kB' 'SUnreclaim: 76364 kB' 'KernelStack: 6232 kB' 
'PageTables: 3772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 336372 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.529 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.530 12:14:30 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.530 
12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.530 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
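What the surrounding xtrace shows: the get_meminfo helper in setup/common.sh walks /proc/meminfo one "key: value" line at a time, and every non-matching key produces the "[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" plus "continue" pair seen above (the backslashes are simply how bash xtrace prints the unquoted match pattern). A minimal sketch of that lookup, using an illustrative function name rather than the exact SPDK helper and ignoring the per-node handling that appears later in this log:

    get_meminfo_sketch() {                    # illustrative name, not the real SPDK function
        local get=$1 var val _
        while IFS=': ' read -r var val _; do  # split "HugePages_Surp:   0" into key and value
            [[ $var == "$get" ]] || continue  # each non-matching key is one "continue" traced above
            echo "$val"                       # prints 0 for HugePages_Surp on this runner
            return 0
        done < /proc/meminfo
    }

In this run the loop matches HugePages_Surp a few entries further down and echoes 0, which setup/hugepages.sh@99 stores as surp=0.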
00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8973296 kB' 'MemAvailable: 10572280 kB' 'Buffers: 2436 kB' 'Cached: 1812480 kB' 'SwapCached: 0 kB' 'Active: 459044 kB' 'Inactive: 1472476 kB' 'Active(anon): 127076 kB' 'Inactive(anon): 0 kB' 'Active(file): 331968 kB' 'Inactive(file): 1472476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 118440 kB' 'Mapped: 48176 kB' 'Shmem: 10472 kB' 'KReclaimable: 62956 kB' 'Slab: 139320 kB' 'SReclaimable: 62956 kB' 'SUnreclaim: 76364 kB' 'KernelStack: 6248 kB' 'PageTables: 3824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 336372 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.531 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.532 12:14:30 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.532 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:21.533 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.533 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.533 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.533 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:21.533 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.533 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.533 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.533 
12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:21.533 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.533 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.533 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.533 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:21.533 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.533 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.533 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.533 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:21.533 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.533 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.533 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.533 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:21.533 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.533 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.533 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.533 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:21.533 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.533 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.533 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.533 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:21.533 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.533 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.533 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.533 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:21.533 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.533 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.533 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.533 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:21.533 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.533 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.533 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.533 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:21.533 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.533 12:14:30 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:12:21.533 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.533 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:21.533 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.533 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.533 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.533 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:21.533 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.533 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.533 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.533 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:21.533 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.533 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.533 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.533 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:21.533 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.533 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.533 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.533 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:21.533 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.533 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.533 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.533 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:21.533 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.533 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.533 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:12:21.534 nr_hugepages=512 00:12:21.534 resv_hugepages=0 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:12:21.534 surplus_hugepages=0 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:12:21.534 anon_hugepages=0 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:12:21.534 12:14:30 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8973296 kB' 'MemAvailable: 10572280 kB' 'Buffers: 2436 kB' 'Cached: 1812480 kB' 'SwapCached: 0 kB' 'Active: 459168 kB' 'Inactive: 1472476 kB' 'Active(anon): 127200 kB' 'Inactive(anon): 0 kB' 'Active(file): 331968 kB' 'Inactive(file): 1472476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 118296 kB' 'Mapped: 48176 kB' 'Shmem: 10472 kB' 'KReclaimable: 62956 kB' 'Slab: 139320 kB' 'SReclaimable: 62956 kB' 'SUnreclaim: 76364 kB' 'KernelStack: 6232 kB' 'PageTables: 3772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 336372 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.534 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.535 12:14:30 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:21.535 12:14:30 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:21.535 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
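The lookup running through here re-reads HugePages_Total so that setup/hugepages.sh@110 can confirm the accounting identity already checked once at @107: the 512 pages requested by the test must equal the kernel's HugePages_Total with surplus and reserved pages folded in. With the values echoed in this run that is just the arithmetic below (a worked restatement of the traced check, not extra test logic):

    nr_hugepages=512; surp=0; resv=0          # values echoed by hugepages.sh in this run
    (( 512 == nr_hugepages + surp + resv ))   # 512 == 512 + 0 + 0, so the check passes
    echo $?                                   # 0

512 pages at the 2048 kB Hugepagesize reported in the meminfo snapshots corresponds to the 'Hugetlb: 1048576 kB' also printed there.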
00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.536 12:14:30 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8973044 kB' 'MemUsed: 3268936 kB' 'SwapCached: 0 kB' 
'Active: 459296 kB' 'Inactive: 1472476 kB' 'Active(anon): 127328 kB' 'Inactive(anon): 0 kB' 'Active(file): 331968 kB' 'Inactive(file): 1472476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'FilePages: 1814916 kB' 'Mapped: 48176 kB' 'AnonPages: 118416 kB' 'Shmem: 10472 kB' 'KernelStack: 6216 kB' 'PageTables: 3720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62956 kB' 'Slab: 139320 kB' 'SReclaimable: 62956 kB' 'SUnreclaim: 76364 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.536 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# continue 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
[[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.537 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.538 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.538 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.538 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.538 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.538 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.538 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
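The repeated 'IFS=': '' / 'read -r var val _' / 'continue' entries in this stretch of the trace are setup/common.sh's get_meminfo helper walking /proc/meminfo one key at a time until it reaches the requested field (HugePages_Surp here); every non-matching key produces one 'continue' entry in the xtrace, which is why the scan is this verbose. A minimal standalone sketch of that pattern, using a hypothetical name (get_meminfo_sketch) rather than the exact SPDK helper, could look like:

    # Hypothetical sketch of the pattern traced here: split each /proc/meminfo
    # line on ': ' and print the value of the requested key (0 if absent).
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip every other meminfo key
            echo "${val:-0}"
            return 0
        done < /proc/meminfo
        echo 0
    }

    get_meminfo_sketch HugePages_Surp   # prints 0 on this VM, matching the 'echo 0' in the trace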
00:12:21.538 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.538 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.538 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.538 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.538 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.538 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.538 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.538 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.538 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.538 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.538 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.538 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:21.538 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:21.538 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:21.538 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:21.538 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:12:21.538 12:14:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:12:21.538 12:14:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:12:21.538 12:14:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:12:21.538 12:14:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:12:21.538 12:14:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:12:21.538 node0=512 expecting 512 00:12:21.538 12:14:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:12:21.538 12:14:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:12:21.538 ************************************ 00:12:21.538 END TEST custom_alloc 00:12:21.538 ************************************ 00:12:21.538 00:12:21.538 real 0m0.989s 00:12:21.538 user 0m0.408s 00:12:21.538 sys 0m0.612s 00:12:21.538 12:14:30 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:21.538 12:14:30 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:12:21.538 12:14:30 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:12:21.538 12:14:30 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:12:21.538 12:14:30 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:21.538 12:14:30 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:21.538 12:14:30 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:12:21.796 ************************************ 00:12:21.796 START TEST no_shrink_alloc 00:12:21.796 ************************************ 00:12:21.796 12:14:31 
setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:12:21.796 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:12:21.796 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:12:21.797 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:12:21.797 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:12:21.797 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:12:21.797 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:12:21.797 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:12:21.797 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:12:21.797 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:12:21.797 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:12:21.797 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:12:21.797 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:12:21.797 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:12:21.797 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:12:21.797 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:12:21.797 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:12:21.797 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:12:21.797 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:12:21.797 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:12:21.797 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:12:21.797 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:12:21.797 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:22.055 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:22.313 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:22.313 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:22.313 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:22.313 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:22.577 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:12:22.577 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:12:22.577 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:12:22.577 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:12:22.577 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:12:22.577 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:12:22.577 12:14:31 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:12:22.577 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:12:22.577 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:12:22.577 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:12:22.577 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:12:22.577 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:12:22.577 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:22.577 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:22.577 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:22.577 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:22.577 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:22.577 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:22.577 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.577 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.577 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7924980 kB' 'MemAvailable: 9523964 kB' 'Buffers: 2436 kB' 'Cached: 1812480 kB' 'SwapCached: 0 kB' 'Active: 459408 kB' 'Inactive: 1472476 kB' 'Active(anon): 127440 kB' 'Inactive(anon): 0 kB' 'Active(file): 331968 kB' 'Inactive(file): 1472476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 118504 kB' 'Mapped: 47996 kB' 'Shmem: 10472 kB' 'KReclaimable: 62956 kB' 'Slab: 139276 kB' 'SReclaimable: 62956 kB' 'SUnreclaim: 76320 kB' 'KernelStack: 6176 kB' 'PageTables: 3676 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336372 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:12:22.577 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:22.577 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.577 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.577 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.577 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:22.577 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.577 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
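The meminfo snapshot printed above ('HugePages_Total: 1024', 'Hugepagesize: 2048 kB', 'Hugetlb: 2097152 kB') is consistent with the earlier get_test_nr_hugepages 2097152 0 call in this trace: assuming the size argument is in kB, as the matching 'Hugetlb: 2097152 kB' line suggests, 2097152 kB at the default 2048 kB page size works out to 1024 hugepages, all assigned to node 0. A small sketch of that arithmetic, reading Hugepagesize from /proc/meminfo instead of hard-coding it:

    # Sketch of the size -> page-count arithmetic implied by the trace
    # (2097152 kB requested / 2048 kB per hugepage = 1024 pages on node 0).
    size_kb=2097152
    hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this VM
    echo $(( size_kb / hugepagesize_kb ))                                # 1024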
00:12:22.577 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.577 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:22.577 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.577 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.577 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.577 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:22.577 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.577 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.577 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.577 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:22.577 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.577 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.577 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.577 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:22.577 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.577 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.577 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.577 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:22.577 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.577 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
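Throughout this scan the right-hand side of each comparison appears as \A\n\o\n\H\u\g\e\P\a\g\e\s: bash's xtrace escapes every character of a quoted (literal, non-glob) pattern inside [[ ]], so the backslashes are a display artifact of the trace, not part of the script. A hypothetical reproduction under set -x:

    set -x
    get=AnonHugePages; var=MemTotal
    [[ $var == "$get" ]] || true   # traces as something like: [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
    set +x

The escaped pattern also makes it easy to tell which get_meminfo call a given stretch of the scan belongs to: AnonHugePages for the anon check here, then HugePages_Surp and HugePages_Rsvd for the surplus/reserved checks that follow.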
00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.578 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7924980 kB' 'MemAvailable: 9523964 kB' 'Buffers: 2436 kB' 'Cached: 1812480 kB' 'SwapCached: 0 kB' 'Active: 459180 kB' 'Inactive: 1472476 kB' 'Active(anon): 127212 kB' 'Inactive(anon): 0 kB' 'Active(file): 331968 kB' 'Inactive(file): 1472476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 118312 kB' 'Mapped: 47996 kB' 'Shmem: 10472 kB' 'KReclaimable: 62956 kB' 'Slab: 139276 kB' 'SReclaimable: 62956 kB' 'SUnreclaim: 76320 kB' 'KernelStack: 6192 kB' 'PageTables: 3720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336372 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.579 12:14:31 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.579 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.580 
12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.580 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.581 12:14:31 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.581 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7927632 kB' 'MemAvailable: 9526616 kB' 'Buffers: 2436 kB' 'Cached: 1812480 kB' 'SwapCached: 0 kB' 'Active: 459272 kB' 'Inactive: 1472476 kB' 'Active(anon): 127304 kB' 'Inactive(anon): 0 kB' 'Active(file): 331968 kB' 'Inactive(file): 1472476 
kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 118400 kB' 'Mapped: 47996 kB' 'Shmem: 10472 kB' 'KReclaimable: 62956 kB' 'Slab: 139276 kB' 'SReclaimable: 62956 kB' 'SUnreclaim: 76320 kB' 'KernelStack: 6208 kB' 'PageTables: 3772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336372 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.582 
12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.582 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:22.583 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.583 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.583 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.583 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:22.583 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.583 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.583 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.583 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:22.583 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.583 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.583 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.583 12:14:31 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:22.583 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.583 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.583 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.583 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:22.583 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.583 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.583 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.583 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:22.583 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.583 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.583 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.583 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:22.583 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.583 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.583 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.583 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:22.583 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.583 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.583 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.583 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:22.583 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.583 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.583 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.583 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:22.583 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.583 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.583 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.583 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:22.583 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.583 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.583 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.583 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:22.583 12:14:31 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:12:22.583 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.583 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.583 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:22.583 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.583 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.583 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.583 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:22.583 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.583 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.583 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.583 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:22.583 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.583 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.583 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.583 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:22.583 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.583 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.583 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.583 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:22.583 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.583 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.583 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.583 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:22.583 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.583 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.583 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.583 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.584 12:14:31 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:12:22.584 nr_hugepages=1024 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:12:22.584 resv_hugepages=0 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:12:22.584 surplus_hugepages=0 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:12:22.584 anon_hugepages=0 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7927632 kB' 'MemAvailable: 9526616 kB' 'Buffers: 2436 kB' 'Cached: 1812480 kB' 'SwapCached: 0 kB' 'Active: 459164 kB' 'Inactive: 1472476 kB' 'Active(anon): 127196 kB' 
'Inactive(anon): 0 kB' 'Active(file): 331968 kB' 'Inactive(file): 1472476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 118340 kB' 'Mapped: 47996 kB' 'Shmem: 10472 kB' 'KReclaimable: 62956 kB' 'Slab: 139276 kB' 'SReclaimable: 62956 kB' 'SUnreclaim: 76320 kB' 'KernelStack: 6192 kB' 'PageTables: 3724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336372 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:22.584 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.585 12:14:31 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:22.585 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:22.586 12:14:31 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7927888 kB' 'MemUsed: 4314092 kB' 'SwapCached: 0 kB' 'Active: 459164 kB' 'Inactive: 1472476 kB' 'Active(anon): 127196 kB' 'Inactive(anon): 0 kB' 'Active(file): 331968 kB' 'Inactive(file): 1472476 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'FilePages: 1814916 kB' 'Mapped: 47996 kB' 'AnonPages: 118296 kB' 'Shmem: 10472 kB' 'KernelStack: 6192 kB' 'PageTables: 3720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62956 kB' 'Slab: 139276 kB' 'SReclaimable: 62956 kB' 'SUnreclaim: 76320 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 
0' 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.586 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.587 12:14:31 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.587 
12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.587 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.588 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.588 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.588 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.588 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.588 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.588 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.588 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.588 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.588 12:14:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.588 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.588 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.588 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.588 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.588 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.588 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.588 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.588 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.588 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.588 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.588 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.588 12:14:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.588 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.588 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.588 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.588 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.588 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.588 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.588 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.588 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:22.588 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:22.588 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:22.588 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:22.588 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:12:22.588 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:12:22.588 node0=1024 expecting 1024 00:12:22.588 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:12:22.588 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:12:22.588 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:12:22.588 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:12:22.588 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:12:22.588 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:12:22.588 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:12:22.588 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:12:22.588 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:12:22.588 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:12:22.588 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:23.154 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:23.416 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:23.416 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:23.416 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:23.416 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:23.416 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:12:23.416 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:12:23.416 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:12:23.416 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 
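[editor's note] The long run of key checks above is setup/common.sh's get_meminfo helper walking a meminfo snapshot field by field: the snapshot is read into an array, IFS=': ' with read -r var val _ splits each "Key:   value kB" entry, non-matching keys hit continue, and the matching key's value is echoed (0 for HugePages_Surp in this run). Below is a minimal sketch of that lookup pattern, reconstructed from the traced commands; it is simplified and not the verbatim SPDK helper (which mapfiles the snapshot and, for per-node queries, reads /sys/devices/system/node/node<N>/meminfo and strips the leading "Node <N> " prefix first):

    # Minimal sketch of the meminfo lookup pattern shown in the trace; simplified,
    # not the verbatim setup/common.sh implementation.
    get_meminfo() {
        local get=$1
        local var val _
        # "Key:   value kB" splits on ': ' into key, value, unit.
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < /proc/meminfo
        echo 0   # fallback when the key is absent (assumption, not shown in the trace)
    }

    # Usage matching the trace: surp=$(get_meminfo HugePages_Surp)   # -> 0 here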
00:12:23.416 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:12:23.416 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:12:23.416 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:12:23.416 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:12:23.416 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:12:23.416 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:12:23.416 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:12:23.416 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:12:23.416 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:12:23.416 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:23.416 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:23.416 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:23.416 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:23.416 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:23.416 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:23.416 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.416 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.416 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7921908 kB' 'MemAvailable: 9520896 kB' 'Buffers: 2436 kB' 'Cached: 1812484 kB' 'SwapCached: 0 kB' 'Active: 459532 kB' 'Inactive: 1472480 kB' 'Active(anon): 127564 kB' 'Inactive(anon): 0 kB' 'Active(file): 331968 kB' 'Inactive(file): 1472480 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 118628 kB' 'Mapped: 48124 kB' 'Shmem: 10472 kB' 'KReclaimable: 62956 kB' 'Slab: 139240 kB' 'SReclaimable: 62956 kB' 'SUnreclaim: 76284 kB' 'KernelStack: 6200 kB' 'PageTables: 3640 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336372 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54964 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:12:23.416 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.416 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.416 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.416 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:12:23.416 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.416 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.416 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.416 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.416 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.416 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.416 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.416 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.416 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.416 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.416 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.416 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.416 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.416 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.416 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.416 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.416 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.416 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.416 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.416 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.416 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.416 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.416 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.416 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.416 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.416 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.416 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.416 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.416 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.416 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.416 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.417 12:14:32 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:23.418 12:14:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7921908 kB' 'MemAvailable: 9520896 kB' 'Buffers: 2436 kB' 'Cached: 1812484 kB' 'SwapCached: 0 kB' 'Active: 459264 kB' 'Inactive: 1472480 kB' 'Active(anon): 127296 kB' 'Inactive(anon): 0 kB' 'Active(file): 331968 kB' 'Inactive(file): 1472480 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 118392 kB' 'Mapped: 48000 kB' 'Shmem: 10472 kB' 'KReclaimable: 62956 kB' 'Slab: 139236 kB' 'SReclaimable: 62956 kB' 'SUnreclaim: 76280 kB' 'KernelStack: 6192 kB' 'PageTables: 3732 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336372 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.418 
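[editor's note] The snapshot printed above is internally consistent on the hugepage side: 1024 pages of 2048 kB account for the whole hugetlb pool reported by the kernel.

    # HugePages_Total x Hugepagesize should equal the Hugetlb line.
    echo $(( 1024 * 2048 ))   # 2097152, matching 'Hugetlb: 2097152 kB'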
12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.418 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.419 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.420 12:14:32 
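[editor's note] With surp=0 in hand, the helper is queried once more for HugePages_Rsvd; verify_nr_hugepages then folds these counts into a per-node total and compares it with the expected value (the "node0=1024 expecting 1024" line earlier in the trace). A rough sketch under stated assumptions, reusing the get_meminfo sketch above; the formula filling nodes_test is an illustrative assumption, only the final echo/compare is taken from the log:

    # Rough sketch of the per-node verification step; the accounting below is an
    # assumption for illustration, not lifted from setup/hugepages.sh.
    verify_nr_hugepages() {
        local expected=$1                      # 1024 in this run
        declare -A nodes_test
        local free resv surp node
        free=$(get_meminfo HugePages_Free)     # 1024 in the snapshot above
        resv=$(get_meminfo HugePages_Rsvd)     # queried next in the trace
        surp=$(get_meminfo HugePages_Surp)     # 0 in the snapshot above
        nodes_test[0]=$(( free - resv + surp ))   # assumed per-node accounting
        for node in "${!nodes_test[@]}"; do
            echo "node$node=${nodes_test[node]} expecting $expected"
            [[ ${nodes_test[node]} == "$expected" ]]
        done
    }
    # e.g. verify_nr_hugepages 1024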
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7929972 kB' 'MemAvailable: 9528960 kB' 'Buffers: 2436 kB' 'Cached: 1812484 kB' 'SwapCached: 0 kB' 'Active: 459384 kB' 'Inactive: 1472480 kB' 'Active(anon): 127416 kB' 'Inactive(anon): 0 kB' 'Active(file): 331968 kB' 'Inactive(file): 1472480 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 118512 kB' 'Mapped: 48000 kB' 'Shmem: 10472 kB' 'KReclaimable: 62956 kB' 'Slab: 139236 kB' 'SReclaimable: 62956 kB' 'SUnreclaim: 76280 kB' 'KernelStack: 6192 kB' 'PageTables: 3728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336372 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.420 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.421 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.422 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.422 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.422 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.422 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.422 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.422 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.422 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.422 12:14:32 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:12:23.422 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.422 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.422 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.422 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.422 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.422 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.422 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.422 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.422 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.422 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.422 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.422 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.422 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.422 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.422 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.422 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.422 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.422 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.422 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.422 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.422 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.422 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.422 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.422 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.422 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.422 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.422 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.422 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.422 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.422 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.422 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.422 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.422 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.422 12:14:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.422 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.422 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.422 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.422 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.422 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.422 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.422 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.422 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.422 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.422 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.422 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.422 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.684 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.684 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.684 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.684 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.684 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.684 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.684 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.684 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.684 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.684 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.684 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.684 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.684 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.684 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.684 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.684 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.684 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.684 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.684 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.684 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.684 12:14:32 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:12:23.684 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.684 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.684 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.684 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.684 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.684 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.684 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.684 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.684 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.684 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.684 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:23.684 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:12:23.684 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:12:23.684 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:12:23.684 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:12:23.684 nr_hugepages=1024 00:12:23.684 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:12:23.684 resv_hugepages=0 00:12:23.684 surplus_hugepages=0 00:12:23.684 anon_hugepages=0 00:12:23.684 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:12:23.684 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:12:23.684 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:12:23.684 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:12:23.684 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:12:23.684 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:12:23.684 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:12:23.684 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:12:23.684 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:23.684 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:23.684 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:23.684 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:23.684 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:23.684 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:23.684 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7929972 kB' 'MemAvailable: 9528960 kB' 
'Buffers: 2436 kB' 'Cached: 1812484 kB' 'SwapCached: 0 kB' 'Active: 459188 kB' 'Inactive: 1472480 kB' 'Active(anon): 127220 kB' 'Inactive(anon): 0 kB' 'Active(file): 331968 kB' 'Inactive(file): 1472480 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 118320 kB' 'Mapped: 48000 kB' 'Shmem: 10472 kB' 'KReclaimable: 62956 kB' 'Slab: 139236 kB' 'SReclaimable: 62956 kB' 'SUnreclaim: 76280 kB' 'KernelStack: 6192 kB' 'PageTables: 3728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336372 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:12:23.684 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.684 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.684 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.684 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.684 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.684 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.684 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.684 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.684 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.684 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.684 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.684 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.684 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.684 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.684 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.684 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.685 12:14:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.685 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
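The long run of [[ <field> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue entries around here is setup/common.sh's get_meminfo loop scanning every captured /proc/meminfo field until it reaches the requested key and echoes its value (1024 for HugePages_Total in this run). A minimal, self-contained sketch of that parsing pattern, not the SPDK helper itself (the function name and the sed-based "Node N " prefix stripping are assumptions), is:

    get_meminfo_sketch() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo var val _
        # Per-node queries read that node's meminfo, whose lines carry a "Node N " prefix.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        # Split each "Key: value [kB]" line on ': '; print the value of the requested key.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && echo "$val" && return 0
        done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
        return 1
    }
    # e.g. get_meminfo_sketch HugePages_Total   -> 1024 on this host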
00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.686 12:14:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7929720 kB' 'MemUsed: 4312260 kB' 'SwapCached: 0 kB' 'Active: 459196 kB' 'Inactive: 1472480 kB' 'Active(anon): 127228 kB' 'Inactive(anon): 0 kB' 'Active(file): 331968 kB' 'Inactive(file): 1472480 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'FilePages: 1814920 kB' 'Mapped: 48000 kB' 'AnonPages: 118320 kB' 'Shmem: 10472 kB' 'KernelStack: 6192 kB' 'PageTables: 3728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62956 kB' 'Slab: 139236 kB' 'SReclaimable: 62956 kB' 'SUnreclaim: 76280 
kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.686 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.687 12:14:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.687 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.688 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.688 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.688 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.688 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.688 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.688 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.688 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.688 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.688 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.688 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.688 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
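The node-scoped pass here reads /sys/devices/system/node/node0/meminfo and looks up HugePages_Surp (0 in this run) before each node is credited with its expected page count, which is what produces the "node0=1024 expecting 1024" line that follows. A hedged sketch of that per-node bookkeeping (variable names and the awk extraction are illustrative, not the hugepages.sh internals):

    shopt -s nullglob                           # an absent node directory yields an empty list
    nodes_test=()
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        nodes_test[${node_dir##*node}]=1024     # expected hugepages per node
    done
    for node in "${!nodes_test[@]}"; do
        surp=$(awk '/HugePages_Surp/ {print $NF}' "/sys/devices/system/node/node$node/meminfo")
        ((nodes_test[node] += ${surp:-0}))      # surplus is 0 in this run
        echo "node$node=${nodes_test[node]} expecting 1024"
    done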
00:12:23.688 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.688 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.688 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.688 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.688 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.688 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.688 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.688 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.688 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.688 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.688 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:23.688 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:23.688 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:23.688 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:23.688 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:12:23.688 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:12:23.688 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:12:23.688 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:12:23.688 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:12:23.688 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:12:23.688 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:12:23.688 node0=1024 expecting 1024 00:12:23.688 12:14:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:12:23.688 ************************************ 00:12:23.688 END TEST no_shrink_alloc 00:12:23.688 ************************************ 00:12:23.688 00:12:23.688 real 0m1.959s 00:12:23.688 user 0m0.826s 00:12:23.688 sys 0m1.222s 00:12:23.688 12:14:32 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:23.688 12:14:32 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:12:23.688 12:14:33 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:12:23.688 12:14:33 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:12:23.688 12:14:33 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:12:23.688 12:14:33 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:12:23.688 12:14:33 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:12:23.688 12:14:33 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:12:23.688 12:14:33 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in 
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:12:23.688 12:14:33 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:12:23.688 12:14:33 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:12:23.688 12:14:33 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:12:23.688 00:12:23.688 real 0m8.392s 00:12:23.688 user 0m3.396s 00:12:23.688 sys 0m5.160s 00:12:23.688 12:14:33 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:23.688 12:14:33 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:12:23.688 ************************************ 00:12:23.688 END TEST hugepages 00:12:23.688 ************************************ 00:12:23.688 12:14:33 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:12:23.688 12:14:33 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:12:23.688 12:14:33 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:23.688 12:14:33 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:23.688 12:14:33 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:12:23.688 ************************************ 00:12:23.688 START TEST driver 00:12:23.688 ************************************ 00:12:23.688 12:14:33 setup.sh.driver -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:12:23.947 * Looking for test storage... 00:12:23.947 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:12:23.947 12:14:33 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:12:23.947 12:14:33 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:12:23.947 12:14:33 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:30.507 12:14:39 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:12:30.507 12:14:39 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:30.507 12:14:39 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:30.507 12:14:39 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:12:30.507 ************************************ 00:12:30.507 START TEST guess_driver 00:12:30.507 ************************************ 00:12:30.507 12:14:39 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:12:30.507 12:14:39 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:12:30.507 12:14:39 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:12:30.507 12:14:39 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:12:30.507 12:14:39 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:12:30.507 12:14:39 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:12:30.507 12:14:39 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:12:30.507 12:14:39 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:12:30.507 12:14:39 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:12:30.507 12:14:39 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:12:30.507 12:14:39 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:12:30.507 12:14:39 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:12:30.507 
12:14:39 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:12:30.507 12:14:39 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:12:30.507 12:14:39 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:12:30.507 12:14:39 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:12:30.507 12:14:39 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:12:30.507 12:14:39 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:12:30.507 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:12:30.507 12:14:39 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:12:30.507 Looking for driver=uio_pci_generic 00:12:30.507 12:14:39 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:12:30.507 12:14:39 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:12:30.507 12:14:39 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:12:30.507 12:14:39 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:12:30.507 12:14:39 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:12:30.507 12:14:39 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:12:30.507 12:14:39 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:12:30.766 12:14:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:12:30.766 12:14:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:12:30.766 12:14:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:12:31.742 12:14:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:12:31.742 12:14:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:12:31.742 12:14:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:12:31.742 12:14:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:12:31.742 12:14:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:12:31.742 12:14:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:12:31.742 12:14:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:12:31.742 12:14:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:12:31.742 12:14:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:12:31.742 12:14:41 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:12:31.742 12:14:41 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:12:31.742 12:14:41 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:12:31.742 12:14:41 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:12:31.742 12:14:41 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:12:31.742 12:14:41 setup.sh.driver.guess_driver 
-- setup/common.sh@9 -- # [[ reset == output ]] 00:12:31.742 12:14:41 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:38.308 00:12:38.308 real 0m7.852s 00:12:38.308 user 0m0.988s 00:12:38.308 sys 0m2.039s 00:12:38.308 12:14:47 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:38.308 ************************************ 00:12:38.308 END TEST guess_driver 00:12:38.308 ************************************ 00:12:38.308 12:14:47 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:12:38.308 12:14:47 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:12:38.308 00:12:38.308 real 0m14.288s 00:12:38.308 user 0m1.463s 00:12:38.308 sys 0m3.172s 00:12:38.308 ************************************ 00:12:38.308 END TEST driver 00:12:38.308 ************************************ 00:12:38.308 12:14:47 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:38.308 12:14:47 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:12:38.308 12:14:47 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:12:38.308 12:14:47 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:12:38.308 12:14:47 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:38.308 12:14:47 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:38.308 12:14:47 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:12:38.308 ************************************ 00:12:38.308 START TEST devices 00:12:38.308 ************************************ 00:12:38.308 12:14:47 setup.sh.devices -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:12:38.308 * Looking for test storage... 
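The driver test above settled on uio_pci_generic: pick_driver first tried vfio (no populated IOMMU groups and unsafe no-IOMMU mode not enabled, so driver.sh returned 1), then accepted uio_pci_generic because modprobe --show-depends could resolve the module. A hedged sketch of that decision order, not the literal driver.sh implementation:

# Sketch: prefer vfio-pci when an IOMMU is usable, otherwise fall back to
# uio_pci_generic if modprobe can resolve it, otherwise report failure.
shopt -s nullglob
pick_driver_sketch() {                               # hypothetical name
    local groups=(/sys/kernel/iommu_groups/*)
    local unsafe=/sys/module/vfio/parameters/enable_unsafe_noiommu_mode
    if ((${#groups[@]} > 0)) || [[ -e $unsafe && $(< "$unsafe") == Y ]]; then
        echo vfio-pci
    elif modprobe --show-depends uio_pci_generic &> /dev/null; then
        echo uio_pci_generic
    else
        echo 'No valid driver found'
    fi
}

pick_driver_sketch    # prints uio_pci_generic on this VM (no IOMMU groups)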
00:12:38.308 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:12:38.308 12:14:47 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:12:38.308 12:14:47 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:12:38.308 12:14:47 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:12:38.308 12:14:47 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:39.734 12:14:49 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:12:39.735 12:14:49 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:12:39.735 12:14:49 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:12:39.735 12:14:49 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:12:39.735 12:14:49 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:12:39.735 12:14:49 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:12:39.735 12:14:49 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:12:39.735 12:14:49 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:12:39.735 12:14:49 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:12:39.735 12:14:49 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:12:39.735 12:14:49 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:12:39.735 12:14:49 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:12:39.735 12:14:49 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:12:39.735 12:14:49 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:12:39.735 12:14:49 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:12:39.735 12:14:49 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:12:39.735 12:14:49 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:12:39.735 12:14:49 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:12:39.735 12:14:49 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:12:39.735 12:14:49 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:12:39.735 12:14:49 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:12:39.735 12:14:49 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:12:39.735 12:14:49 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:12:39.735 12:14:49 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:12:39.735 12:14:49 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:12:39.735 12:14:49 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:12:39.735 12:14:49 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:12:39.735 12:14:49 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:12:39.735 12:14:49 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:12:39.735 12:14:49 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:12:39.735 12:14:49 
setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:12:39.735 12:14:49 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:12:39.735 12:14:49 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:12:39.735 12:14:49 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:12:39.735 12:14:49 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:12:39.735 12:14:49 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:12:39.735 12:14:49 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:12:39.735 12:14:49 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:12:39.735 12:14:49 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:12:39.735 12:14:49 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:12:39.735 12:14:49 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:12:39.735 12:14:49 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:12:39.735 12:14:49 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:12:39.735 12:14:49 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:12:39.735 12:14:49 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:12:39.735 12:14:49 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:12:39.735 12:14:49 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:12:39.735 12:14:49 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:12:39.735 12:14:49 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:12:39.735 12:14:49 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:12:39.735 12:14:49 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:12:39.735 12:14:49 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:12:39.735 No valid GPT data, bailing 00:12:39.735 12:14:49 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:12:39.994 12:14:49 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:12:39.994 12:14:49 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:12:39.994 12:14:49 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:12:39.994 12:14:49 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:39.994 12:14:49 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:39.994 12:14:49 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:12:39.994 12:14:49 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:12:39.994 12:14:49 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:12:39.994 12:14:49 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:12:39.994 12:14:49 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:12:39.994 12:14:49 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:12:39.994 12:14:49 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:12:39.994 12:14:49 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:12:39.994 12:14:49 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:12:39.994 
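At this point nvme0n1 has passed the screening (not zoned, no partition table reported by spdk-gpt.py/blkid, 5368709120 bytes against the 3221225472-byte minimum) and the same checks repeat below for nvme1n1 through nvme3n1. A compressed sketch of that screening, using hypothetical helper names (the real checks live in common/autotest_common.sh and setup/devices.sh, and use spdk-gpt.py in addition to blkid):

# Sketch: skip zoned namespaces, skip namespaces that already carry a
# partition table, keep anything at least min_disk_size bytes.
shopt -s extglob nullglob

is_block_zoned_sketch() {
    [[ -e /sys/block/$1/queue/zoned && $(< "/sys/block/$1/queue/zoned") != none ]]
}

min_disk_size=$((3 * 1024 * 1024 * 1024))            # 3221225472, as in the trace

usable_namespaces_sketch() {
    local block dev size
    for block in "/sys/block/nvme"!(*c*); do          # skip controller nodes like nvme3c3n1
        dev=${block##*/}
        is_block_zoned_sketch "$dev" && continue
        [[ -n $(blkid -s PTTYPE -o value "/dev/$dev") ]] && continue   # already partitioned
        size=$(($(< "$block/size") * 512))             # 512 B sectors -> bytes
        ((size >= min_disk_size)) && echo "$dev: $size bytes"
    done
}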
12:14:49 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:12:39.994 12:14:49 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:12:39.994 12:14:49 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:12:39.994 No valid GPT data, bailing 00:12:39.994 12:14:49 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:12:39.994 12:14:49 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:12:39.994 12:14:49 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:12:39.994 12:14:49 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:12:39.994 12:14:49 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:12:39.994 12:14:49 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:12:39.994 12:14:49 setup.sh.devices -- setup/common.sh@80 -- # echo 6343335936 00:12:39.994 12:14:49 setup.sh.devices -- setup/devices.sh@204 -- # (( 6343335936 >= min_disk_size )) 00:12:39.994 12:14:49 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:12:39.994 12:14:49 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:12:39.994 12:14:49 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:12:39.994 12:14:49 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n1 00:12:39.994 12:14:49 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:12:39.994 12:14:49 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:12:39.994 12:14:49 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:12:39.994 12:14:49 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n1 00:12:39.994 12:14:49 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n1 pt 00:12:39.994 12:14:49 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n1 00:12:39.994 No valid GPT data, bailing 00:12:39.994 12:14:49 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:12:39.994 12:14:49 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:12:39.994 12:14:49 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:12:39.994 12:14:49 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n1 00:12:39.995 12:14:49 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n1 00:12:39.995 12:14:49 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n1 ]] 00:12:39.995 12:14:49 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:12:39.995 12:14:49 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:12:39.995 12:14:49 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:12:39.995 12:14:49 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:12:39.995 12:14:49 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:12:39.995 12:14:49 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n2 00:12:39.995 12:14:49 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:12:39.995 12:14:49 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:12:39.995 12:14:49 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:12:39.995 12:14:49 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n2 00:12:39.995 12:14:49 
setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n2 pt 00:12:39.995 12:14:49 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n2 00:12:39.995 No valid GPT data, bailing 00:12:39.995 12:14:49 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:12:39.995 12:14:49 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:12:39.995 12:14:49 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:12:39.995 12:14:49 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n2 00:12:39.995 12:14:49 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n2 00:12:39.995 12:14:49 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n2 ]] 00:12:39.995 12:14:49 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:12:39.995 12:14:49 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:12:39.995 12:14:49 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:12:39.995 12:14:49 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:12:39.995 12:14:49 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:12:39.995 12:14:49 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n3 00:12:39.995 12:14:49 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:12:39.995 12:14:49 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:12:39.995 12:14:49 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:12:39.995 12:14:49 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n3 00:12:39.995 12:14:49 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n3 pt 00:12:39.995 12:14:49 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n3 00:12:40.254 No valid GPT data, bailing 00:12:40.254 12:14:49 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:12:40.254 12:14:49 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:12:40.254 12:14:49 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:12:40.254 12:14:49 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n3 00:12:40.254 12:14:49 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n3 00:12:40.254 12:14:49 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n3 ]] 00:12:40.254 12:14:49 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:12:40.254 12:14:49 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:12:40.254 12:14:49 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:12:40.254 12:14:49 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:12:40.254 12:14:49 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:12:40.254 12:14:49 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme3n1 00:12:40.254 12:14:49 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme3 00:12:40.254 12:14:49 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:13.0 00:12:40.254 12:14:49 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\3\.\0* ]] 00:12:40.254 12:14:49 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme3n1 00:12:40.254 12:14:49 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme3n1 pt 00:12:40.254 12:14:49 setup.sh.devices 
-- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme3n1 00:12:40.254 No valid GPT data, bailing 00:12:40.254 12:14:49 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:12:40.254 12:14:49 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:12:40.254 12:14:49 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:12:40.254 12:14:49 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme3n1 00:12:40.254 12:14:49 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme3n1 00:12:40.254 12:14:49 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme3n1 ]] 00:12:40.254 12:14:49 setup.sh.devices -- setup/common.sh@80 -- # echo 1073741824 00:12:40.254 12:14:49 setup.sh.devices -- setup/devices.sh@204 -- # (( 1073741824 >= min_disk_size )) 00:12:40.254 12:14:49 setup.sh.devices -- setup/devices.sh@209 -- # (( 5 > 0 )) 00:12:40.254 12:14:49 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:12:40.254 12:14:49 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:12:40.254 12:14:49 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:40.254 12:14:49 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:40.254 12:14:49 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:12:40.254 ************************************ 00:12:40.254 START TEST nvme_mount 00:12:40.254 ************************************ 00:12:40.254 12:14:49 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:12:40.254 12:14:49 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:12:40.254 12:14:49 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:12:40.254 12:14:49 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:40.254 12:14:49 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:12:40.254 12:14:49 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:12:40.254 12:14:49 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:12:40.254 12:14:49 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:12:40.254 12:14:49 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:12:40.254 12:14:49 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:12:40.254 12:14:49 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:12:40.254 12:14:49 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:12:40.254 12:14:49 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:12:40.254 12:14:49 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:12:40.254 12:14:49 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:12:40.254 12:14:49 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:12:40.254 12:14:49 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:12:40.254 12:14:49 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:12:40.254 12:14:49 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:12:40.254 12:14:49 setup.sh.devices.nvme_mount -- setup/common.sh@53 
-- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:12:41.631 Creating new GPT entries in memory. 00:12:41.631 GPT data structures destroyed! You may now partition the disk using fdisk or 00:12:41.631 other utilities. 00:12:41.631 12:14:50 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:12:41.631 12:14:50 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:12:41.631 12:14:50 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:12:41.631 12:14:50 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:12:41.631 12:14:50 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:12:42.568 Creating new GPT entries in memory. 00:12:42.568 The operation has completed successfully. 00:12:42.568 12:14:51 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:12:42.568 12:14:51 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:12:42.568 12:14:51 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 59457 00:12:42.568 12:14:51 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:42.568 12:14:51 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:12:42.568 12:14:51 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:42.568 12:14:51 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:12:42.568 12:14:51 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:12:42.568 12:14:51 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:42.568 12:14:51 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:12:42.568 12:14:51 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:12:42.568 12:14:51 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:12:42.568 12:14:51 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:42.568 12:14:51 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:12:42.568 12:14:51 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:12:42.568 12:14:51 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:12:42.568 12:14:51 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:12:42.568 12:14:51 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:12:42.568 12:14:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:42.568 12:14:51 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:12:42.568 12:14:51 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:12:42.568 12:14:51 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- 
# [[ output == output ]] 00:12:42.568 12:14:51 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:12:42.826 12:14:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:42.826 12:14:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:12:42.826 12:14:52 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:12:42.826 12:14:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:42.826 12:14:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:42.826 12:14:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:42.826 12:14:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:42.826 12:14:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:43.086 12:14:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:43.086 12:14:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:43.086 12:14:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:43.086 12:14:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:43.344 12:14:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:43.344 12:14:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:43.602 12:14:52 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:12:43.602 12:14:52 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:12:43.602 12:14:52 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:43.602 12:14:52 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:12:43.602 12:14:52 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:12:43.602 12:14:53 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:12:43.602 12:14:53 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:43.602 12:14:53 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:43.602 12:14:53 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:12:43.602 12:14:53 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:12:43.602 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:12:43.602 12:14:53 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:12:43.602 12:14:53 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:12:43.860 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:12:43.860 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 
00:12:43.860 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:12:43.860 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:12:43.860 12:14:53 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:12:43.860 12:14:53 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:12:43.860 12:14:53 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:43.860 12:14:53 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:12:43.860 12:14:53 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:12:44.119 12:14:53 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:44.119 12:14:53 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:12:44.119 12:14:53 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:12:44.119 12:14:53 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:12:44.119 12:14:53 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:44.119 12:14:53 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:12:44.119 12:14:53 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:12:44.119 12:14:53 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:12:44.119 12:14:53 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:12:44.119 12:14:53 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:12:44.119 12:14:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:44.119 12:14:53 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:12:44.119 12:14:53 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:12:44.119 12:14:53 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:12:44.119 12:14:53 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:12:44.379 12:14:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:44.379 12:14:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:12:44.379 12:14:53 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:12:44.379 12:14:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:44.379 12:14:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:44.379 12:14:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:44.637 12:14:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:44.637 
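A few lines above, the test reformatted the whole namespace instead of a partition: the mkfs helper makes sure the mountpoint exists, runs mkfs.ext4 -qF (capped at 1024M on this pass), and mounts the result before verify re-runs setup.sh config. A minimal restatement of those three steps, assuming that is all the helper does (this is a sketch, not the verbatim setup/common.sh code):

# Sketch of the mkfs/mount step: dev may be a partition (/dev/nvme0n1p1),
# a whole namespace (/dev/nvme0n1) or, later, a dm node (/dev/mapper/nvme_dm_test).
mkfs_and_mount_sketch() {                            # hypothetical name
    local dev=$1 mount_point=$2 size=$3              # size optional, e.g. 1024M
    mkdir -p "$mount_point"
    mkfs.ext4 -qF "$dev" ${size:+"$size"}            # -q quiet, -F skip the safety prompts
    mount "$dev" "$mount_point"
}

# e.g. mkfs_and_mount_sketch /dev/nvme0n1 \
#          /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M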
12:14:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:44.637 12:14:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:44.637 12:14:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:44.638 12:14:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:44.638 12:14:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:44.897 12:14:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:44.897 12:14:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:45.156 12:14:54 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:12:45.156 12:14:54 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:12:45.156 12:14:54 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:45.156 12:14:54 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:12:45.156 12:14:54 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:12:45.156 12:14:54 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:45.416 12:14:54 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:12:45.416 12:14:54 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:12:45.416 12:14:54 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:12:45.416 12:14:54 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:12:45.416 12:14:54 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:12:45.416 12:14:54 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:12:45.416 12:14:54 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:12:45.416 12:14:54 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:12:45.416 12:14:54 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:12:45.416 12:14:54 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:12:45.416 12:14:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:45.416 12:14:54 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:12:45.416 12:14:54 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:12:45.675 12:14:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:45.675 12:14:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:12:45.675 12:14:55 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:12:45.675 12:14:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:45.675 12:14:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:45.675 12:14:55 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:45.934 12:14:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:45.934 12:14:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:45.934 12:14:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:45.934 12:14:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:45.934 12:14:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:45.934 12:14:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:46.503 12:14:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:46.503 12:14:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:46.503 12:14:55 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:12:46.503 12:14:55 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:12:46.503 12:14:55 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:12:46.503 12:14:55 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:12:46.503 12:14:55 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:46.503 12:14:55 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:12:46.503 12:14:55 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:12:46.503 12:14:55 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:12:46.503 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:12:46.503 00:12:46.503 real 0m6.336s 00:12:46.503 user 0m1.659s 00:12:46.503 sys 0m2.386s 00:12:46.503 12:14:55 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:46.503 12:14:55 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:12:46.503 ************************************ 00:12:46.503 END TEST nvme_mount 00:12:46.503 ************************************ 00:12:46.762 12:14:56 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:12:46.762 12:14:56 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:12:46.762 12:14:56 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:46.762 12:14:56 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:46.762 12:14:56 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:12:46.762 ************************************ 00:12:46.762 START TEST dm_mount 00:12:46.762 ************************************ 00:12:46.762 12:14:56 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:12:46.762 12:14:56 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:12:46.762 12:14:56 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:12:46.762 12:14:56 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:12:46.762 12:14:56 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:12:46.762 12:14:56 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:12:46.762 12:14:56 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:12:46.762 
12:14:56 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:12:46.762 12:14:56 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:12:46.762 12:14:56 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:12:46.762 12:14:56 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:12:46.762 12:14:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:12:46.762 12:14:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:12:46.762 12:14:56 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:12:46.762 12:14:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:12:46.762 12:14:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:12:46.762 12:14:56 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:12:46.762 12:14:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:12:46.762 12:14:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:12:46.762 12:14:56 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:12:46.762 12:14:56 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:12:46.762 12:14:56 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:12:47.698 Creating new GPT entries in memory. 00:12:47.698 GPT data structures destroyed! You may now partition the disk using fdisk or 00:12:47.698 other utilities. 00:12:47.698 12:14:57 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:12:47.698 12:14:57 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:12:47.698 12:14:57 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:12:47.698 12:14:57 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:12:47.698 12:14:57 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:12:48.633 Creating new GPT entries in memory. 00:12:48.633 The operation has completed successfully. 00:12:48.891 12:14:58 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:12:48.891 12:14:58 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:12:48.891 12:14:58 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:12:48.891 12:14:58 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:12:48.891 12:14:58 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:12:49.826 The operation has completed successfully. 
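partition_drive has now wiped nvme0n1 and created the two 128 MiB partitions dm_mount needs (sectors 2048-264191 and 264192-526335), serialising each sgdisk call with flock on the disk node and waiting for the partition uevents via sync_dev_uevents.sh before moving on. A hedged sketch of that flow (the uevent helper path is taken from the trace; the rest is paraphrased, not the verbatim setup/common.sh loop):

# Sketch: wipe a disk and carve it into equal GPT partitions the way the
# trace above does, holding the whole-disk lock so concurrent sgdisk calls
# on the same node cannot interleave.
partition_drive_sketch() {                           # hypothetical name
    local disk=$1 parts=${2:-2} size_sectors=262144  # 262144 * 512 B = 128 MiB
    local part start=2048 end

    sgdisk "/dev/$disk" --zap-all
    for ((part = 1; part <= parts; part++)); do
        end=$((start + size_sectors - 1))
        flock "/dev/$disk" sgdisk "/dev/$disk" --new=$part:$start:$end
        start=$((end + 1))
    done
    # the real run also waits for the partition uevents, e.g.:
    # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition \
    #     "${disk}p1" "${disk}p2"
}

partition_drive_sketch nvme0n1 2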
00:12:49.826 12:14:59 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:12:49.826 12:14:59 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:12:49.826 12:14:59 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 60093 00:12:49.826 12:14:59 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:12:49.826 12:14:59 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:12:49.826 12:14:59 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:12:49.826 12:14:59 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:12:49.826 12:14:59 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:12:49.826 12:14:59 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:12:49.826 12:14:59 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:12:49.826 12:14:59 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:12:49.826 12:14:59 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:12:49.826 12:14:59 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:12:49.826 12:14:59 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:12:49.826 12:14:59 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:12:49.826 12:14:59 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:12:49.826 12:14:59 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:12:49.826 12:14:59 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:12:49.826 12:14:59 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:12:49.826 12:14:59 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:12:49.826 12:14:59 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:12:49.826 12:14:59 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:12:49.826 12:14:59 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:12:49.826 12:14:59 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:12:49.826 12:14:59 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:12:49.826 12:14:59 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:12:49.826 12:14:59 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:12:49.826 12:14:59 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:12:49.826 12:14:59 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:12:49.827 12:14:59 setup.sh.devices.dm_mount -- 
setup/devices.sh@56 -- # : 00:12:49.827 12:14:59 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:12:49.827 12:14:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:49.827 12:14:59 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:12:49.827 12:14:59 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:12:49.827 12:14:59 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:12:49.827 12:14:59 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:12:50.085 12:14:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:50.085 12:14:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:12:50.085 12:14:59 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:12:50.085 12:14:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:50.085 12:14:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:50.085 12:14:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:50.344 12:14:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:50.344 12:14:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:50.344 12:14:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:50.344 12:14:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:50.344 12:14:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:50.344 12:14:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:50.910 12:15:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:50.910 12:15:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:50.910 12:15:00 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:12:50.910 12:15:00 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:12:50.910 12:15:00 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:12:50.911 12:15:00 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:12:50.911 12:15:00 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:12:50.911 12:15:00 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:12:50.911 12:15:00 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:12:50.911 12:15:00 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:12:50.911 12:15:00 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:12:50.911 12:15:00 setup.sh.devices.dm_mount -- 
setup/devices.sh@50 -- # local mount_point= 00:12:50.911 12:15:00 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:12:50.911 12:15:00 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:12:50.911 12:15:00 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:12:50.911 12:15:00 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:12:50.911 12:15:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:50.911 12:15:00 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:12:50.911 12:15:00 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:12:50.911 12:15:00 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:12:50.911 12:15:00 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:12:51.474 12:15:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:51.474 12:15:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:12:51.474 12:15:00 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:12:51.474 12:15:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:51.474 12:15:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:51.474 12:15:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:51.474 12:15:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:51.474 12:15:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:51.474 12:15:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:51.474 12:15:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:51.474 12:15:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:51.474 12:15:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:52.040 12:15:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:52.040 12:15:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:52.298 12:15:01 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:12:52.298 12:15:01 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:12:52.298 12:15:01 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:12:52.298 12:15:01 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:12:52.298 12:15:01 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:12:52.298 12:15:01 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:12:52.298 12:15:01 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:12:52.298 12:15:01 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:12:52.298 12:15:01 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 
00:12:52.298 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:12:52.298 12:15:01 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:12:52.298 12:15:01 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:12:52.298 00:12:52.298 real 0m5.575s 00:12:52.298 user 0m1.024s 00:12:52.298 sys 0m1.473s 00:12:52.298 12:15:01 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:52.298 12:15:01 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:12:52.298 ************************************ 00:12:52.298 END TEST dm_mount 00:12:52.298 ************************************ 00:12:52.298 12:15:01 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:12:52.298 12:15:01 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:12:52.298 12:15:01 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:12:52.298 12:15:01 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:52.298 12:15:01 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:12:52.298 12:15:01 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:12:52.298 12:15:01 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:12:52.298 12:15:01 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:12:52.555 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:12:52.555 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:12:52.555 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:12:52.555 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:12:52.555 12:15:01 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:12:52.555 12:15:01 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:12:52.555 12:15:01 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:12:52.555 12:15:01 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:12:52.555 12:15:01 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:12:52.555 12:15:01 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:12:52.555 12:15:01 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:12:52.555 ************************************ 00:12:52.555 END TEST devices 00:12:52.555 ************************************ 00:12:52.555 00:12:52.555 real 0m14.503s 00:12:52.555 user 0m3.694s 00:12:52.555 sys 0m5.161s 00:12:52.555 12:15:01 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:52.555 12:15:01 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:12:52.812 12:15:02 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:12:52.812 00:12:52.812 real 0m52.117s 00:12:52.812 user 0m12.389s 00:12:52.812 sys 0m19.781s 00:12:52.812 12:15:02 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:52.812 12:15:02 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:12:52.812 ************************************ 00:12:52.812 END TEST setup.sh 00:12:52.812 ************************************ 00:12:52.812 12:15:02 -- common/autotest_common.sh@1142 -- # return 0 00:12:52.812 12:15:02 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:12:53.375 0000:00:03.0 (1af4 
1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:53.998 Hugepages 00:12:53.998 node hugesize free / total 00:12:53.998 node0 1048576kB 0 / 0 00:12:53.998 node0 2048kB 2048 / 2048 00:12:53.998 00:12:53.998 Type BDF Vendor Device NUMA Driver Device Block devices 00:12:53.998 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:12:54.255 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:12:54.255 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:12:54.255 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:12:54.513 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:12:54.513 12:15:03 -- spdk/autotest.sh@130 -- # uname -s 00:12:54.513 12:15:03 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:12:54.513 12:15:03 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:12:54.513 12:15:03 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:55.078 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:56.010 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:12:56.010 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:12:56.010 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:56.010 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:56.010 12:15:05 -- common/autotest_common.sh@1532 -- # sleep 1 00:12:56.944 12:15:06 -- common/autotest_common.sh@1533 -- # bdfs=() 00:12:56.944 12:15:06 -- common/autotest_common.sh@1533 -- # local bdfs 00:12:56.944 12:15:06 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:12:56.944 12:15:06 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:12:56.944 12:15:06 -- common/autotest_common.sh@1513 -- # bdfs=() 00:12:56.944 12:15:06 -- common/autotest_common.sh@1513 -- # local bdfs 00:12:56.944 12:15:06 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:56.944 12:15:06 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:56.944 12:15:06 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:12:57.203 12:15:06 -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:12:57.203 12:15:06 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:57.203 12:15:06 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:57.462 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:57.720 Waiting for block devices as requested 00:12:57.979 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:57.979 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:58.238 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:58.238 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:03.611 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:03.611 12:15:12 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:13:03.611 12:15:12 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:13:03.611 12:15:12 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:13:03.611 12:15:12 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:13:03.611 12:15:12 -- common/autotest_common.sh@1502 -- # 
bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:13:03.611 12:15:12 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:13:03.611 12:15:12 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:13:03.611 12:15:12 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:13:03.611 12:15:12 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:13:03.611 12:15:12 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:13:03.611 12:15:12 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:13:03.611 12:15:12 -- common/autotest_common.sh@1545 -- # grep oacs 00:13:03.611 12:15:12 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:13:03.611 12:15:12 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:13:03.611 12:15:12 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:13:03.611 12:15:12 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:13:03.611 12:15:12 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:13:03.611 12:15:12 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:13:03.611 12:15:12 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:13:03.611 12:15:12 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:13:03.611 12:15:12 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:13:03.611 12:15:12 -- common/autotest_common.sh@1557 -- # continue 00:13:03.611 12:15:12 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:13:03.611 12:15:12 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:13:03.611 12:15:12 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:13:03.611 12:15:12 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:13:03.611 12:15:12 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:13:03.611 12:15:12 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:13:03.611 12:15:12 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:13:03.611 12:15:12 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:13:03.611 12:15:12 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:13:03.611 12:15:12 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:13:03.611 12:15:12 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:13:03.611 12:15:12 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:13:03.611 12:15:12 -- common/autotest_common.sh@1545 -- # grep oacs 00:13:03.611 12:15:12 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:13:03.611 12:15:12 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:13:03.611 12:15:12 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:13:03.611 12:15:12 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:13:03.611 12:15:12 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:13:03.611 12:15:12 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:13:03.611 12:15:12 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:13:03.611 12:15:12 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:13:03.611 12:15:12 -- common/autotest_common.sh@1557 -- # continue 00:13:03.611 12:15:12 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:13:03.612 12:15:12 -- common/autotest_common.sh@1539 -- # 
get_nvme_ctrlr_from_bdf 0000:00:12.0 00:13:03.612 12:15:12 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:13:03.612 12:15:12 -- common/autotest_common.sh@1502 -- # grep 0000:00:12.0/nvme/nvme 00:13:03.612 12:15:12 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:13:03.612 12:15:12 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:13:03.612 12:15:12 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:13:03.612 12:15:12 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme2 00:13:03.612 12:15:12 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme2 00:13:03.612 12:15:12 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme2 ]] 00:13:03.612 12:15:12 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme2 00:13:03.612 12:15:12 -- common/autotest_common.sh@1545 -- # grep oacs 00:13:03.612 12:15:12 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:13:03.612 12:15:12 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:13:03.612 12:15:12 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:13:03.612 12:15:12 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:13:03.612 12:15:12 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme2 00:13:03.612 12:15:12 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:13:03.612 12:15:12 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:13:03.612 12:15:12 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:13:03.612 12:15:12 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:13:03.612 12:15:12 -- common/autotest_common.sh@1557 -- # continue 00:13:03.612 12:15:12 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:13:03.612 12:15:12 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:13:03.612 12:15:12 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:13:03.612 12:15:12 -- common/autotest_common.sh@1502 -- # grep 0000:00:13.0/nvme/nvme 00:13:03.612 12:15:12 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:13:03.612 12:15:12 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:13:03.612 12:15:12 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:13:03.612 12:15:12 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme3 00:13:03.612 12:15:12 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme3 00:13:03.612 12:15:12 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme3 ]] 00:13:03.612 12:15:12 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme3 00:13:03.612 12:15:12 -- common/autotest_common.sh@1545 -- # grep oacs 00:13:03.612 12:15:12 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:13:03.612 12:15:12 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:13:03.612 12:15:12 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:13:03.612 12:15:12 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:13:03.612 12:15:12 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:13:03.612 12:15:12 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme3 00:13:03.612 12:15:12 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:13:03.612 12:15:12 -- 
common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:13:03.612 12:15:12 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:13:03.612 12:15:12 -- common/autotest_common.sh@1557 -- # continue 00:13:03.612 12:15:12 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:13:03.612 12:15:12 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:03.612 12:15:12 -- common/autotest_common.sh@10 -- # set +x 00:13:03.612 12:15:12 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:13:03.612 12:15:12 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:03.612 12:15:12 -- common/autotest_common.sh@10 -- # set +x 00:13:03.612 12:15:12 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:04.179 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:05.114 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:05.114 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:05.114 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:05.114 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:05.114 12:15:14 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:13:05.114 12:15:14 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:05.114 12:15:14 -- common/autotest_common.sh@10 -- # set +x 00:13:05.114 12:15:14 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:13:05.114 12:15:14 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:13:05.114 12:15:14 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:13:05.114 12:15:14 -- common/autotest_common.sh@1577 -- # bdfs=() 00:13:05.114 12:15:14 -- common/autotest_common.sh@1577 -- # local bdfs 00:13:05.114 12:15:14 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:13:05.114 12:15:14 -- common/autotest_common.sh@1513 -- # bdfs=() 00:13:05.114 12:15:14 -- common/autotest_common.sh@1513 -- # local bdfs 00:13:05.114 12:15:14 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:13:05.114 12:15:14 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:05.114 12:15:14 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:13:05.114 12:15:14 -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:13:05.114 12:15:14 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:13:05.114 12:15:14 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:13:05.114 12:15:14 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:13:05.114 12:15:14 -- common/autotest_common.sh@1580 -- # device=0x0010 00:13:05.114 12:15:14 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:13:05.114 12:15:14 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:13:05.114 12:15:14 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:13:05.114 12:15:14 -- common/autotest_common.sh@1580 -- # device=0x0010 00:13:05.114 12:15:14 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:13:05.114 12:15:14 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:13:05.114 12:15:14 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:13:05.114 12:15:14 -- common/autotest_common.sh@1580 -- # device=0x0010 00:13:05.114 12:15:14 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:13:05.114 12:15:14 -- 
common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:13:05.114 12:15:14 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:13:05.114 12:15:14 -- common/autotest_common.sh@1580 -- # device=0x0010 00:13:05.114 12:15:14 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:13:05.114 12:15:14 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:13:05.114 12:15:14 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:13:05.114 12:15:14 -- common/autotest_common.sh@1593 -- # return 0 00:13:05.114 12:15:14 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:13:05.114 12:15:14 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:13:05.114 12:15:14 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:13:05.114 12:15:14 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:13:05.114 12:15:14 -- spdk/autotest.sh@162 -- # timing_enter lib 00:13:05.114 12:15:14 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:05.114 12:15:14 -- common/autotest_common.sh@10 -- # set +x 00:13:05.373 12:15:14 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:13:05.373 12:15:14 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:13:05.373 12:15:14 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:05.373 12:15:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:05.373 12:15:14 -- common/autotest_common.sh@10 -- # set +x 00:13:05.373 ************************************ 00:13:05.373 START TEST env 00:13:05.373 ************************************ 00:13:05.373 12:15:14 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:13:05.373 * Looking for test storage... 00:13:05.373 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:13:05.373 12:15:14 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:13:05.373 12:15:14 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:05.373 12:15:14 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:05.373 12:15:14 env -- common/autotest_common.sh@10 -- # set +x 00:13:05.373 ************************************ 00:13:05.373 START TEST env_memory 00:13:05.373 ************************************ 00:13:05.373 12:15:14 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:13:05.373 00:13:05.373 00:13:05.373 CUnit - A unit testing framework for C - Version 2.1-3 00:13:05.373 http://cunit.sourceforge.net/ 00:13:05.373 00:13:05.373 00:13:05.373 Suite: memory 00:13:05.373 Test: alloc and free memory map ...[2024-07-10 12:15:14.790155] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:13:05.373 passed 00:13:05.373 Test: mem map translation ...[2024-07-10 12:15:14.831998] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:13:05.373 [2024-07-10 12:15:14.832079] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:13:05.373 [2024-07-10 12:15:14.832147] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:13:05.373 [2024-07-10 12:15:14.832173] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:13:05.633 passed 00:13:05.633 Test: mem map registration ...[2024-07-10 12:15:14.897466] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:13:05.633 [2024-07-10 12:15:14.897559] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:13:05.633 passed 00:13:05.633 Test: mem map adjacent registrations ...passed 00:13:05.633 00:13:05.633 Run Summary: Type Total Ran Passed Failed Inactive 00:13:05.633 suites 1 1 n/a 0 0 00:13:05.633 tests 4 4 4 0 0 00:13:05.633 asserts 152 152 152 0 n/a 00:13:05.633 00:13:05.633 Elapsed time = 0.237 seconds 00:13:05.633 00:13:05.633 real 0m0.302s 00:13:05.633 user 0m0.254s 00:13:05.633 sys 0m0.035s 00:13:05.633 ************************************ 00:13:05.633 END TEST env_memory 00:13:05.633 ************************************ 00:13:05.633 12:15:15 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:05.633 12:15:15 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:13:05.633 12:15:15 env -- common/autotest_common.sh@1142 -- # return 0 00:13:05.633 12:15:15 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:13:05.633 12:15:15 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:05.633 12:15:15 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:05.633 12:15:15 env -- common/autotest_common.sh@10 -- # set +x 00:13:05.633 ************************************ 00:13:05.633 START TEST env_vtophys 00:13:05.633 ************************************ 00:13:05.633 12:15:15 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:13:05.892 EAL: lib.eal log level changed from notice to debug 00:13:05.892 EAL: Detected lcore 0 as core 0 on socket 0 00:13:05.892 EAL: Detected lcore 1 as core 0 on socket 0 00:13:05.892 EAL: Detected lcore 2 as core 0 on socket 0 00:13:05.892 EAL: Detected lcore 3 as core 0 on socket 0 00:13:05.892 EAL: Detected lcore 4 as core 0 on socket 0 00:13:05.892 EAL: Detected lcore 5 as core 0 on socket 0 00:13:05.892 EAL: Detected lcore 6 as core 0 on socket 0 00:13:05.892 EAL: Detected lcore 7 as core 0 on socket 0 00:13:05.892 EAL: Detected lcore 8 as core 0 on socket 0 00:13:05.892 EAL: Detected lcore 9 as core 0 on socket 0 00:13:05.892 EAL: Maximum logical cores by configuration: 128 00:13:05.892 EAL: Detected CPU lcores: 10 00:13:05.892 EAL: Detected NUMA nodes: 1 00:13:05.892 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:13:05.892 EAL: Detected shared linkage of DPDK 00:13:05.892 EAL: No shared files mode enabled, IPC will be disabled 00:13:05.892 EAL: Selected IOVA mode 'PA' 00:13:05.892 EAL: Probing VFIO support... 00:13:05.892 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:13:05.892 EAL: VFIO modules not loaded, skipping VFIO support... 00:13:05.892 EAL: Ask a virtual area of 0x2e000 bytes 00:13:05.892 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:13:05.892 EAL: Setting up physically contiguous memory... 
00:13:05.892 EAL: Setting maximum number of open files to 524288 00:13:05.892 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:13:05.892 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:13:05.892 EAL: Ask a virtual area of 0x61000 bytes 00:13:05.892 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:13:05.892 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:13:05.892 EAL: Ask a virtual area of 0x400000000 bytes 00:13:05.892 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:13:05.892 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:13:05.892 EAL: Ask a virtual area of 0x61000 bytes 00:13:05.892 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:13:05.892 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:13:05.892 EAL: Ask a virtual area of 0x400000000 bytes 00:13:05.892 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:13:05.892 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:13:05.892 EAL: Ask a virtual area of 0x61000 bytes 00:13:05.892 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:13:05.892 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:13:05.892 EAL: Ask a virtual area of 0x400000000 bytes 00:13:05.892 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:13:05.892 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:13:05.892 EAL: Ask a virtual area of 0x61000 bytes 00:13:05.892 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:13:05.892 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:13:05.892 EAL: Ask a virtual area of 0x400000000 bytes 00:13:05.892 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:13:05.892 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:13:05.892 EAL: Hugepages will be freed exactly as allocated. 00:13:05.892 EAL: No shared files mode enabled, IPC is disabled 00:13:05.892 EAL: No shared files mode enabled, IPC is disabled 00:13:05.892 EAL: TSC frequency is ~2490000 KHz 00:13:05.892 EAL: Main lcore 0 is ready (tid=7f5d36846a40;cpuset=[0]) 00:13:05.892 EAL: Trying to obtain current memory policy. 00:13:05.892 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:05.892 EAL: Restoring previous memory policy: 0 00:13:05.892 EAL: request: mp_malloc_sync 00:13:05.892 EAL: No shared files mode enabled, IPC is disabled 00:13:05.892 EAL: Heap on socket 0 was expanded by 2MB 00:13:05.892 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:13:05.892 EAL: No PCI address specified using 'addr=' in: bus=pci 00:13:05.892 EAL: Mem event callback 'spdk:(nil)' registered 00:13:05.892 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:13:05.892 00:13:05.892 00:13:05.892 CUnit - A unit testing framework for C - Version 2.1-3 00:13:05.892 http://cunit.sourceforge.net/ 00:13:05.892 00:13:05.892 00:13:05.892 Suite: components_suite 00:13:06.459 Test: vtophys_malloc_test ...passed 00:13:06.459 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
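A minimal sketch of how the env_vtophys run above could be repeated by hand, assuming the repository paths printed in this log; HUGEMEM (megabytes reserved by scripts/setup.sh) is an assumption, sized to match the 2048 x 2048kB pages this node reports:

  # reserve hugepages roughly as in this run, then execute the suite directly
  sudo HUGEMEM=4096 /home/vagrant/spdk_repo/spdk/scripts/setup.sh
  /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys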
00:13:06.459 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:06.459 EAL: Restoring previous memory policy: 4 00:13:06.459 EAL: Calling mem event callback 'spdk:(nil)' 00:13:06.459 EAL: request: mp_malloc_sync 00:13:06.459 EAL: No shared files mode enabled, IPC is disabled 00:13:06.459 EAL: Heap on socket 0 was expanded by 4MB 00:13:06.459 EAL: Calling mem event callback 'spdk:(nil)' 00:13:06.459 EAL: request: mp_malloc_sync 00:13:06.459 EAL: No shared files mode enabled, IPC is disabled 00:13:06.459 EAL: Heap on socket 0 was shrunk by 4MB 00:13:06.459 EAL: Trying to obtain current memory policy. 00:13:06.459 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:06.459 EAL: Restoring previous memory policy: 4 00:13:06.459 EAL: Calling mem event callback 'spdk:(nil)' 00:13:06.459 EAL: request: mp_malloc_sync 00:13:06.459 EAL: No shared files mode enabled, IPC is disabled 00:13:06.459 EAL: Heap on socket 0 was expanded by 6MB 00:13:06.459 EAL: Calling mem event callback 'spdk:(nil)' 00:13:06.459 EAL: request: mp_malloc_sync 00:13:06.459 EAL: No shared files mode enabled, IPC is disabled 00:13:06.459 EAL: Heap on socket 0 was shrunk by 6MB 00:13:06.459 EAL: Trying to obtain current memory policy. 00:13:06.459 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:06.459 EAL: Restoring previous memory policy: 4 00:13:06.459 EAL: Calling mem event callback 'spdk:(nil)' 00:13:06.459 EAL: request: mp_malloc_sync 00:13:06.459 EAL: No shared files mode enabled, IPC is disabled 00:13:06.459 EAL: Heap on socket 0 was expanded by 10MB 00:13:06.459 EAL: Calling mem event callback 'spdk:(nil)' 00:13:06.459 EAL: request: mp_malloc_sync 00:13:06.459 EAL: No shared files mode enabled, IPC is disabled 00:13:06.459 EAL: Heap on socket 0 was shrunk by 10MB 00:13:06.459 EAL: Trying to obtain current memory policy. 00:13:06.459 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:06.459 EAL: Restoring previous memory policy: 4 00:13:06.459 EAL: Calling mem event callback 'spdk:(nil)' 00:13:06.459 EAL: request: mp_malloc_sync 00:13:06.459 EAL: No shared files mode enabled, IPC is disabled 00:13:06.459 EAL: Heap on socket 0 was expanded by 18MB 00:13:06.459 EAL: Calling mem event callback 'spdk:(nil)' 00:13:06.459 EAL: request: mp_malloc_sync 00:13:06.459 EAL: No shared files mode enabled, IPC is disabled 00:13:06.459 EAL: Heap on socket 0 was shrunk by 18MB 00:13:06.719 EAL: Trying to obtain current memory policy. 00:13:06.719 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:06.719 EAL: Restoring previous memory policy: 4 00:13:06.719 EAL: Calling mem event callback 'spdk:(nil)' 00:13:06.719 EAL: request: mp_malloc_sync 00:13:06.719 EAL: No shared files mode enabled, IPC is disabled 00:13:06.719 EAL: Heap on socket 0 was expanded by 34MB 00:13:06.719 EAL: Calling mem event callback 'spdk:(nil)' 00:13:06.719 EAL: request: mp_malloc_sync 00:13:06.719 EAL: No shared files mode enabled, IPC is disabled 00:13:06.719 EAL: Heap on socket 0 was shrunk by 34MB 00:13:06.719 EAL: Trying to obtain current memory policy. 
00:13:06.719 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:06.719 EAL: Restoring previous memory policy: 4 00:13:06.719 EAL: Calling mem event callback 'spdk:(nil)' 00:13:06.719 EAL: request: mp_malloc_sync 00:13:06.719 EAL: No shared files mode enabled, IPC is disabled 00:13:06.719 EAL: Heap on socket 0 was expanded by 66MB 00:13:06.978 EAL: Calling mem event callback 'spdk:(nil)' 00:13:06.978 EAL: request: mp_malloc_sync 00:13:06.978 EAL: No shared files mode enabled, IPC is disabled 00:13:06.978 EAL: Heap on socket 0 was shrunk by 66MB 00:13:06.978 EAL: Trying to obtain current memory policy. 00:13:06.978 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:07.237 EAL: Restoring previous memory policy: 4 00:13:07.237 EAL: Calling mem event callback 'spdk:(nil)' 00:13:07.237 EAL: request: mp_malloc_sync 00:13:07.237 EAL: No shared files mode enabled, IPC is disabled 00:13:07.237 EAL: Heap on socket 0 was expanded by 130MB 00:13:07.496 EAL: Calling mem event callback 'spdk:(nil)' 00:13:07.496 EAL: request: mp_malloc_sync 00:13:07.496 EAL: No shared files mode enabled, IPC is disabled 00:13:07.496 EAL: Heap on socket 0 was shrunk by 130MB 00:13:07.755 EAL: Trying to obtain current memory policy. 00:13:07.755 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:08.014 EAL: Restoring previous memory policy: 4 00:13:08.014 EAL: Calling mem event callback 'spdk:(nil)' 00:13:08.014 EAL: request: mp_malloc_sync 00:13:08.014 EAL: No shared files mode enabled, IPC is disabled 00:13:08.014 EAL: Heap on socket 0 was expanded by 258MB 00:13:08.580 EAL: Calling mem event callback 'spdk:(nil)' 00:13:08.838 EAL: request: mp_malloc_sync 00:13:08.838 EAL: No shared files mode enabled, IPC is disabled 00:13:08.838 EAL: Heap on socket 0 was shrunk by 258MB 00:13:09.095 EAL: Trying to obtain current memory policy. 00:13:09.095 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:09.389 EAL: Restoring previous memory policy: 4 00:13:09.389 EAL: Calling mem event callback 'spdk:(nil)' 00:13:09.389 EAL: request: mp_malloc_sync 00:13:09.389 EAL: No shared files mode enabled, IPC is disabled 00:13:09.389 EAL: Heap on socket 0 was expanded by 514MB 00:13:10.762 EAL: Calling mem event callback 'spdk:(nil)' 00:13:11.020 EAL: request: mp_malloc_sync 00:13:11.020 EAL: No shared files mode enabled, IPC is disabled 00:13:11.020 EAL: Heap on socket 0 was shrunk by 514MB 00:13:11.963 EAL: Trying to obtain current memory policy. 
00:13:11.963 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:12.222 EAL: Restoring previous memory policy: 4 00:13:12.222 EAL: Calling mem event callback 'spdk:(nil)' 00:13:12.222 EAL: request: mp_malloc_sync 00:13:12.222 EAL: No shared files mode enabled, IPC is disabled 00:13:12.222 EAL: Heap on socket 0 was expanded by 1026MB 00:13:14.782 EAL: Calling mem event callback 'spdk:(nil)' 00:13:15.041 EAL: request: mp_malloc_sync 00:13:15.041 EAL: No shared files mode enabled, IPC is disabled 00:13:15.041 EAL: Heap on socket 0 was shrunk by 1026MB 00:13:16.969 passed 00:13:16.969 00:13:16.969 Run Summary: Type Total Ran Passed Failed Inactive 00:13:16.969 suites 1 1 n/a 0 0 00:13:16.969 tests 2 2 2 0 0 00:13:16.969 asserts 5334 5334 5334 0 n/a 00:13:16.969 00:13:16.969 Elapsed time = 10.606 seconds 00:13:16.969 EAL: Calling mem event callback 'spdk:(nil)' 00:13:16.969 EAL: request: mp_malloc_sync 00:13:16.969 EAL: No shared files mode enabled, IPC is disabled 00:13:16.969 EAL: Heap on socket 0 was shrunk by 2MB 00:13:16.969 EAL: No shared files mode enabled, IPC is disabled 00:13:16.969 EAL: No shared files mode enabled, IPC is disabled 00:13:16.969 EAL: No shared files mode enabled, IPC is disabled 00:13:16.969 00:13:16.969 real 0m10.949s 00:13:16.969 user 0m9.244s 00:13:16.969 sys 0m1.535s 00:13:16.969 12:15:26 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:16.969 ************************************ 00:13:16.969 END TEST env_vtophys 00:13:16.969 ************************************ 00:13:16.969 12:15:26 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:13:16.969 12:15:26 env -- common/autotest_common.sh@1142 -- # return 0 00:13:16.969 12:15:26 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:13:16.969 12:15:26 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:16.969 12:15:26 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:16.969 12:15:26 env -- common/autotest_common.sh@10 -- # set +x 00:13:16.969 ************************************ 00:13:16.969 START TEST env_pci 00:13:16.969 ************************************ 00:13:16.969 12:15:26 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:13:16.969 00:13:16.969 00:13:16.969 CUnit - A unit testing framework for C - Version 2.1-3 00:13:16.969 http://cunit.sourceforge.net/ 00:13:16.969 00:13:16.969 00:13:16.969 Suite: pci 00:13:16.969 Test: pci_hook ...[2024-07-10 12:15:26.144125] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 61983 has claimed it 00:13:16.969 passed 00:13:16.969 00:13:16.969 Run Summary: Type Total Ran Passed Failed Inactive 00:13:16.969 suites 1 1 n/a 0 0 00:13:16.969 tests 1 1 1 0 0 00:13:16.969 asserts 25 25 25 0 n/a 00:13:16.969 00:13:16.969 Elapsed time = 0.008 seconds 00:13:16.969 EAL: Cannot find device (10000:00:01.0) 00:13:16.969 EAL: Failed to attach device on primary process 00:13:16.969 00:13:16.969 real 0m0.106s 00:13:16.969 user 0m0.050s 00:13:16.969 sys 0m0.055s 00:13:16.969 ************************************ 00:13:16.969 END TEST env_pci 00:13:16.969 ************************************ 00:13:16.969 12:15:26 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:16.969 12:15:26 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:13:16.969 12:15:26 env -- common/autotest_common.sh@1142 -- # 
return 0 00:13:16.969 12:15:26 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:13:16.969 12:15:26 env -- env/env.sh@15 -- # uname 00:13:16.969 12:15:26 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:13:16.969 12:15:26 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:13:16.969 12:15:26 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:13:16.969 12:15:26 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:13:16.969 12:15:26 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:16.969 12:15:26 env -- common/autotest_common.sh@10 -- # set +x 00:13:16.970 ************************************ 00:13:16.970 START TEST env_dpdk_post_init 00:13:16.970 ************************************ 00:13:16.970 12:15:26 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:13:16.970 EAL: Detected CPU lcores: 10 00:13:16.970 EAL: Detected NUMA nodes: 1 00:13:16.970 EAL: Detected shared linkage of DPDK 00:13:16.970 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:13:16.970 EAL: Selected IOVA mode 'PA' 00:13:17.228 TELEMETRY: No legacy callbacks, legacy socket not created 00:13:17.228 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:13:17.228 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:13:17.228 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:13:17.228 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:13:17.228 Starting DPDK initialization... 00:13:17.228 Starting SPDK post initialization... 00:13:17.228 SPDK NVMe probe 00:13:17.228 Attaching to 0000:00:10.0 00:13:17.228 Attaching to 0000:00:11.0 00:13:17.228 Attaching to 0000:00:12.0 00:13:17.228 Attaching to 0000:00:13.0 00:13:17.228 Attached to 0000:00:10.0 00:13:17.228 Attached to 0000:00:11.0 00:13:17.228 Attached to 0000:00:13.0 00:13:17.228 Attached to 0000:00:12.0 00:13:17.228 Cleaning up... 
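The DPDK post-init check above can in principle be rerun on its own with the same core mask and base virtual address that run_test passed to it (command reproduced from this log; binding the NVMe controllers to a userspace driver beforehand is assumed):

  /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init \
      -c 0x1 --base-virtaddr=0x200000000000

It is expected to attach to the same four emulated controllers (1b36 0010 at 0000:00:10.0 through 0000:00:13.0) that setup.sh bound earlier in this log.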
00:13:17.228 00:13:17.228 real 0m0.323s 00:13:17.228 user 0m0.106s 00:13:17.228 sys 0m0.117s 00:13:17.228 12:15:26 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:17.228 12:15:26 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:13:17.228 ************************************ 00:13:17.228 END TEST env_dpdk_post_init 00:13:17.228 ************************************ 00:13:17.228 12:15:26 env -- common/autotest_common.sh@1142 -- # return 0 00:13:17.228 12:15:26 env -- env/env.sh@26 -- # uname 00:13:17.228 12:15:26 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:13:17.228 12:15:26 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:13:17.228 12:15:26 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:17.228 12:15:26 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:17.228 12:15:26 env -- common/autotest_common.sh@10 -- # set +x 00:13:17.228 ************************************ 00:13:17.228 START TEST env_mem_callbacks 00:13:17.228 ************************************ 00:13:17.228 12:15:26 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:13:17.487 EAL: Detected CPU lcores: 10 00:13:17.487 EAL: Detected NUMA nodes: 1 00:13:17.487 EAL: Detected shared linkage of DPDK 00:13:17.487 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:13:17.487 EAL: Selected IOVA mode 'PA' 00:13:17.487 TELEMETRY: No legacy callbacks, legacy socket not created 00:13:17.487 00:13:17.487 00:13:17.487 CUnit - A unit testing framework for C - Version 2.1-3 00:13:17.487 http://cunit.sourceforge.net/ 00:13:17.487 00:13:17.487 00:13:17.487 Suite: memory 00:13:17.487 Test: test ... 
00:13:17.487 register 0x200000200000 2097152 00:13:17.487 malloc 3145728 00:13:17.487 register 0x200000400000 4194304 00:13:17.487 buf 0x2000004fffc0 len 3145728 PASSED 00:13:17.487 malloc 64 00:13:17.487 buf 0x2000004ffec0 len 64 PASSED 00:13:17.487 malloc 4194304 00:13:17.487 register 0x200000800000 6291456 00:13:17.487 buf 0x2000009fffc0 len 4194304 PASSED 00:13:17.487 free 0x2000004fffc0 3145728 00:13:17.487 free 0x2000004ffec0 64 00:13:17.487 unregister 0x200000400000 4194304 PASSED 00:13:17.487 free 0x2000009fffc0 4194304 00:13:17.487 unregister 0x200000800000 6291456 PASSED 00:13:17.487 malloc 8388608 00:13:17.487 register 0x200000400000 10485760 00:13:17.487 buf 0x2000005fffc0 len 8388608 PASSED 00:13:17.487 free 0x2000005fffc0 8388608 00:13:17.487 unregister 0x200000400000 10485760 PASSED 00:13:17.487 passed 00:13:17.487 00:13:17.487 Run Summary: Type Total Ran Passed Failed Inactive 00:13:17.487 suites 1 1 n/a 0 0 00:13:17.487 tests 1 1 1 0 0 00:13:17.487 asserts 15 15 15 0 n/a 00:13:17.487 00:13:17.487 Elapsed time = 0.086 seconds 00:13:17.745 00:13:17.745 real 0m0.293s 00:13:17.745 user 0m0.115s 00:13:17.745 sys 0m0.076s 00:13:17.745 12:15:26 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:17.745 12:15:26 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:13:17.745 ************************************ 00:13:17.745 END TEST env_mem_callbacks 00:13:17.745 ************************************ 00:13:17.745 12:15:27 env -- common/autotest_common.sh@1142 -- # return 0 00:13:17.745 ************************************ 00:13:17.745 END TEST env 00:13:17.745 ************************************ 00:13:17.745 00:13:17.745 real 0m12.426s 00:13:17.745 user 0m9.936s 00:13:17.745 sys 0m2.081s 00:13:17.745 12:15:27 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:17.745 12:15:27 env -- common/autotest_common.sh@10 -- # set +x 00:13:17.745 12:15:27 -- common/autotest_common.sh@1142 -- # return 0 00:13:17.745 12:15:27 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:13:17.745 12:15:27 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:17.745 12:15:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:17.745 12:15:27 -- common/autotest_common.sh@10 -- # set +x 00:13:17.745 ************************************ 00:13:17.745 START TEST rpc 00:13:17.745 ************************************ 00:13:17.745 12:15:27 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:13:18.003 * Looking for test storage... 00:13:18.003 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:13:18.003 12:15:27 rpc -- rpc/rpc.sh@65 -- # spdk_pid=62102 00:13:18.003 12:15:27 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:13:18.003 12:15:27 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:13:18.003 12:15:27 rpc -- rpc/rpc.sh@67 -- # waitforlisten 62102 00:13:18.003 12:15:27 rpc -- common/autotest_common.sh@829 -- # '[' -z 62102 ']' 00:13:18.003 12:15:27 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:18.003 12:15:27 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:18.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:18.003 12:15:27 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
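A hypothetical condensed form of the spdk_tgt start-up traced above: the target is launched with the bdev tracepoint group and the test then waits for the UNIX socket to answer. Polling with rpc_get_methods is an assumption for illustration; the real waitforlisten helper in autotest_common.sh is more involved.

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
  spdk_pid=$!
  # poll the default RPC socket until the target responds
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done

Once the socket answers, the rpc_integrity test below exercises it with bdev_malloc_create, bdev_passthru_create and bdev_get_bdevs.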
00:13:18.003 12:15:27 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:18.003 12:15:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.003 [2024-07-10 12:15:27.365275] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:13:18.003 [2024-07-10 12:15:27.365431] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62102 ] 00:13:18.262 [2024-07-10 12:15:27.539750] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:18.520 [2024-07-10 12:15:27.793063] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:13:18.520 [2024-07-10 12:15:27.793127] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 62102' to capture a snapshot of events at runtime. 00:13:18.520 [2024-07-10 12:15:27.793144] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:18.520 [2024-07-10 12:15:27.793173] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:18.520 [2024-07-10 12:15:27.793187] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid62102 for offline analysis/debug. 00:13:18.520 [2024-07-10 12:15:27.793228] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:19.455 12:15:28 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:19.455 12:15:28 rpc -- common/autotest_common.sh@862 -- # return 0 00:13:19.455 12:15:28 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:13:19.455 12:15:28 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:13:19.455 12:15:28 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:13:19.455 12:15:28 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:13:19.455 12:15:28 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:19.455 12:15:28 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:19.455 12:15:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.455 ************************************ 00:13:19.455 START TEST rpc_integrity 00:13:19.455 ************************************ 00:13:19.455 12:15:28 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:13:19.455 12:15:28 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:19.455 12:15:28 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.455 12:15:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:13:19.716 12:15:28 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.716 12:15:28 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:13:19.716 12:15:28 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:13:19.717 12:15:28 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:13:19.717 12:15:28 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:13:19.717 12:15:28 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.717 12:15:28 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:13:19.717 12:15:29 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.717 12:15:29 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:13:19.717 12:15:29 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:13:19.717 12:15:29 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.717 12:15:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:13:19.717 12:15:29 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.717 12:15:29 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:13:19.717 { 00:13:19.717 "name": "Malloc0", 00:13:19.717 "aliases": [ 00:13:19.717 "236c9014-53d8-4baf-85bc-d24030daf622" 00:13:19.717 ], 00:13:19.717 "product_name": "Malloc disk", 00:13:19.717 "block_size": 512, 00:13:19.717 "num_blocks": 16384, 00:13:19.717 "uuid": "236c9014-53d8-4baf-85bc-d24030daf622", 00:13:19.717 "assigned_rate_limits": { 00:13:19.717 "rw_ios_per_sec": 0, 00:13:19.717 "rw_mbytes_per_sec": 0, 00:13:19.717 "r_mbytes_per_sec": 0, 00:13:19.717 "w_mbytes_per_sec": 0 00:13:19.717 }, 00:13:19.717 "claimed": false, 00:13:19.717 "zoned": false, 00:13:19.717 "supported_io_types": { 00:13:19.717 "read": true, 00:13:19.717 "write": true, 00:13:19.717 "unmap": true, 00:13:19.717 "flush": true, 00:13:19.717 "reset": true, 00:13:19.717 "nvme_admin": false, 00:13:19.717 "nvme_io": false, 00:13:19.717 "nvme_io_md": false, 00:13:19.717 "write_zeroes": true, 00:13:19.717 "zcopy": true, 00:13:19.717 "get_zone_info": false, 00:13:19.717 "zone_management": false, 00:13:19.717 "zone_append": false, 00:13:19.717 "compare": false, 00:13:19.717 "compare_and_write": false, 00:13:19.717 "abort": true, 00:13:19.717 "seek_hole": false, 00:13:19.717 "seek_data": false, 00:13:19.717 "copy": true, 00:13:19.717 "nvme_iov_md": false 00:13:19.717 }, 00:13:19.717 "memory_domains": [ 00:13:19.717 { 00:13:19.717 "dma_device_id": "system", 00:13:19.717 "dma_device_type": 1 00:13:19.717 }, 00:13:19.717 { 00:13:19.717 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.717 "dma_device_type": 2 00:13:19.717 } 00:13:19.717 ], 00:13:19.717 "driver_specific": {} 00:13:19.717 } 00:13:19.717 ]' 00:13:19.717 12:15:29 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:13:19.717 12:15:29 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:13:19.717 12:15:29 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:13:19.717 12:15:29 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.717 12:15:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:13:19.717 [2024-07-10 12:15:29.095118] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:13:19.717 [2024-07-10 12:15:29.095223] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:19.717 [2024-07-10 12:15:29.095276] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:19.717 [2024-07-10 12:15:29.095289] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:19.717 [2024-07-10 12:15:29.098013] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:19.717 [2024-07-10 12:15:29.098056] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:13:19.717 Passthru0 00:13:19.717 12:15:29 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.717 
12:15:29 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:13:19.717 12:15:29 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.717 12:15:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:13:19.717 12:15:29 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.717 12:15:29 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:13:19.717 { 00:13:19.717 "name": "Malloc0", 00:13:19.717 "aliases": [ 00:13:19.717 "236c9014-53d8-4baf-85bc-d24030daf622" 00:13:19.717 ], 00:13:19.717 "product_name": "Malloc disk", 00:13:19.717 "block_size": 512, 00:13:19.717 "num_blocks": 16384, 00:13:19.717 "uuid": "236c9014-53d8-4baf-85bc-d24030daf622", 00:13:19.717 "assigned_rate_limits": { 00:13:19.717 "rw_ios_per_sec": 0, 00:13:19.717 "rw_mbytes_per_sec": 0, 00:13:19.717 "r_mbytes_per_sec": 0, 00:13:19.717 "w_mbytes_per_sec": 0 00:13:19.717 }, 00:13:19.717 "claimed": true, 00:13:19.717 "claim_type": "exclusive_write", 00:13:19.717 "zoned": false, 00:13:19.717 "supported_io_types": { 00:13:19.717 "read": true, 00:13:19.717 "write": true, 00:13:19.717 "unmap": true, 00:13:19.717 "flush": true, 00:13:19.717 "reset": true, 00:13:19.717 "nvme_admin": false, 00:13:19.717 "nvme_io": false, 00:13:19.717 "nvme_io_md": false, 00:13:19.717 "write_zeroes": true, 00:13:19.717 "zcopy": true, 00:13:19.717 "get_zone_info": false, 00:13:19.717 "zone_management": false, 00:13:19.717 "zone_append": false, 00:13:19.717 "compare": false, 00:13:19.717 "compare_and_write": false, 00:13:19.717 "abort": true, 00:13:19.717 "seek_hole": false, 00:13:19.717 "seek_data": false, 00:13:19.717 "copy": true, 00:13:19.717 "nvme_iov_md": false 00:13:19.717 }, 00:13:19.717 "memory_domains": [ 00:13:19.717 { 00:13:19.717 "dma_device_id": "system", 00:13:19.717 "dma_device_type": 1 00:13:19.717 }, 00:13:19.717 { 00:13:19.717 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.717 "dma_device_type": 2 00:13:19.717 } 00:13:19.717 ], 00:13:19.717 "driver_specific": {} 00:13:19.717 }, 00:13:19.717 { 00:13:19.717 "name": "Passthru0", 00:13:19.717 "aliases": [ 00:13:19.717 "31b0b118-d2d5-5712-9d9e-d1cf94d21677" 00:13:19.717 ], 00:13:19.717 "product_name": "passthru", 00:13:19.717 "block_size": 512, 00:13:19.717 "num_blocks": 16384, 00:13:19.717 "uuid": "31b0b118-d2d5-5712-9d9e-d1cf94d21677", 00:13:19.717 "assigned_rate_limits": { 00:13:19.717 "rw_ios_per_sec": 0, 00:13:19.717 "rw_mbytes_per_sec": 0, 00:13:19.717 "r_mbytes_per_sec": 0, 00:13:19.717 "w_mbytes_per_sec": 0 00:13:19.717 }, 00:13:19.717 "claimed": false, 00:13:19.717 "zoned": false, 00:13:19.717 "supported_io_types": { 00:13:19.717 "read": true, 00:13:19.717 "write": true, 00:13:19.717 "unmap": true, 00:13:19.717 "flush": true, 00:13:19.717 "reset": true, 00:13:19.717 "nvme_admin": false, 00:13:19.717 "nvme_io": false, 00:13:19.717 "nvme_io_md": false, 00:13:19.717 "write_zeroes": true, 00:13:19.717 "zcopy": true, 00:13:19.717 "get_zone_info": false, 00:13:19.717 "zone_management": false, 00:13:19.717 "zone_append": false, 00:13:19.717 "compare": false, 00:13:19.717 "compare_and_write": false, 00:13:19.717 "abort": true, 00:13:19.717 "seek_hole": false, 00:13:19.717 "seek_data": false, 00:13:19.717 "copy": true, 00:13:19.717 "nvme_iov_md": false 00:13:19.717 }, 00:13:19.717 "memory_domains": [ 00:13:19.717 { 00:13:19.717 "dma_device_id": "system", 00:13:19.717 "dma_device_type": 1 00:13:19.717 }, 00:13:19.717 { 00:13:19.717 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.717 "dma_device_type": 2 
00:13:19.717 } 00:13:19.717 ], 00:13:19.717 "driver_specific": { 00:13:19.717 "passthru": { 00:13:19.717 "name": "Passthru0", 00:13:19.717 "base_bdev_name": "Malloc0" 00:13:19.717 } 00:13:19.717 } 00:13:19.717 } 00:13:19.717 ]' 00:13:19.717 12:15:29 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:13:19.717 12:15:29 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:13:19.717 12:15:29 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:13:19.717 12:15:29 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.717 12:15:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:13:19.982 12:15:29 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.982 12:15:29 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:13:19.982 12:15:29 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.982 12:15:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:13:19.982 12:15:29 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.982 12:15:29 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:13:19.982 12:15:29 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.982 12:15:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:13:19.982 12:15:29 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.982 12:15:29 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:13:19.982 12:15:29 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:13:19.982 ************************************ 00:13:19.982 END TEST rpc_integrity 00:13:19.982 ************************************ 00:13:19.982 12:15:29 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:13:19.982 00:13:19.982 real 0m0.385s 00:13:19.982 user 0m0.201s 00:13:19.982 sys 0m0.065s 00:13:19.982 12:15:29 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:19.982 12:15:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:13:19.982 12:15:29 rpc -- common/autotest_common.sh@1142 -- # return 0 00:13:19.982 12:15:29 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:13:19.982 12:15:29 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:19.982 12:15:29 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:19.982 12:15:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.982 ************************************ 00:13:19.982 START TEST rpc_plugins 00:13:19.982 ************************************ 00:13:19.982 12:15:29 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:13:19.982 12:15:29 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:13:19.982 12:15:29 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.982 12:15:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:13:19.982 12:15:29 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.982 12:15:29 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:13:19.982 12:15:29 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:13:19.982 12:15:29 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.982 12:15:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:13:19.982 12:15:29 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.982 12:15:29 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
bdevs='[ 00:13:19.982 { 00:13:19.982 "name": "Malloc1", 00:13:19.982 "aliases": [ 00:13:19.982 "ae41e681-8f9f-4380-b03f-4ecdf25e1e27" 00:13:19.982 ], 00:13:19.982 "product_name": "Malloc disk", 00:13:19.982 "block_size": 4096, 00:13:19.982 "num_blocks": 256, 00:13:19.982 "uuid": "ae41e681-8f9f-4380-b03f-4ecdf25e1e27", 00:13:19.982 "assigned_rate_limits": { 00:13:19.982 "rw_ios_per_sec": 0, 00:13:19.982 "rw_mbytes_per_sec": 0, 00:13:19.982 "r_mbytes_per_sec": 0, 00:13:19.982 "w_mbytes_per_sec": 0 00:13:19.982 }, 00:13:19.982 "claimed": false, 00:13:19.982 "zoned": false, 00:13:19.982 "supported_io_types": { 00:13:19.982 "read": true, 00:13:19.982 "write": true, 00:13:19.982 "unmap": true, 00:13:19.982 "flush": true, 00:13:19.982 "reset": true, 00:13:19.982 "nvme_admin": false, 00:13:19.982 "nvme_io": false, 00:13:19.982 "nvme_io_md": false, 00:13:19.982 "write_zeroes": true, 00:13:19.982 "zcopy": true, 00:13:19.982 "get_zone_info": false, 00:13:19.982 "zone_management": false, 00:13:19.982 "zone_append": false, 00:13:19.982 "compare": false, 00:13:19.982 "compare_and_write": false, 00:13:19.982 "abort": true, 00:13:19.982 "seek_hole": false, 00:13:19.982 "seek_data": false, 00:13:19.982 "copy": true, 00:13:19.982 "nvme_iov_md": false 00:13:19.982 }, 00:13:19.982 "memory_domains": [ 00:13:19.982 { 00:13:19.982 "dma_device_id": "system", 00:13:19.982 "dma_device_type": 1 00:13:19.982 }, 00:13:19.982 { 00:13:19.982 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.982 "dma_device_type": 2 00:13:19.982 } 00:13:19.982 ], 00:13:19.982 "driver_specific": {} 00:13:19.982 } 00:13:19.982 ]' 00:13:19.982 12:15:29 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:13:20.242 12:15:29 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:13:20.242 12:15:29 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:13:20.242 12:15:29 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.242 12:15:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:13:20.242 12:15:29 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.242 12:15:29 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:13:20.242 12:15:29 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.242 12:15:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:13:20.242 12:15:29 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.242 12:15:29 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:13:20.242 12:15:29 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:13:20.242 ************************************ 00:13:20.242 END TEST rpc_plugins 00:13:20.242 ************************************ 00:13:20.242 12:15:29 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:13:20.242 00:13:20.242 real 0m0.172s 00:13:20.242 user 0m0.100s 00:13:20.242 sys 0m0.029s 00:13:20.242 12:15:29 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:20.242 12:15:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:13:20.242 12:15:29 rpc -- common/autotest_common.sh@1142 -- # return 0 00:13:20.242 12:15:29 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:13:20.242 12:15:29 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:20.242 12:15:29 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:20.242 12:15:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.242 ************************************ 00:13:20.242 
START TEST rpc_trace_cmd_test 00:13:20.242 ************************************ 00:13:20.242 12:15:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:13:20.242 12:15:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:13:20.242 12:15:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:13:20.242 12:15:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.242 12:15:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.242 12:15:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.242 12:15:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:13:20.242 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid62102", 00:13:20.242 "tpoint_group_mask": "0x8", 00:13:20.242 "iscsi_conn": { 00:13:20.242 "mask": "0x2", 00:13:20.242 "tpoint_mask": "0x0" 00:13:20.242 }, 00:13:20.242 "scsi": { 00:13:20.242 "mask": "0x4", 00:13:20.242 "tpoint_mask": "0x0" 00:13:20.242 }, 00:13:20.242 "bdev": { 00:13:20.242 "mask": "0x8", 00:13:20.242 "tpoint_mask": "0xffffffffffffffff" 00:13:20.242 }, 00:13:20.242 "nvmf_rdma": { 00:13:20.242 "mask": "0x10", 00:13:20.242 "tpoint_mask": "0x0" 00:13:20.242 }, 00:13:20.242 "nvmf_tcp": { 00:13:20.242 "mask": "0x20", 00:13:20.242 "tpoint_mask": "0x0" 00:13:20.242 }, 00:13:20.242 "ftl": { 00:13:20.242 "mask": "0x40", 00:13:20.242 "tpoint_mask": "0x0" 00:13:20.242 }, 00:13:20.242 "blobfs": { 00:13:20.242 "mask": "0x80", 00:13:20.242 "tpoint_mask": "0x0" 00:13:20.242 }, 00:13:20.242 "dsa": { 00:13:20.242 "mask": "0x200", 00:13:20.242 "tpoint_mask": "0x0" 00:13:20.242 }, 00:13:20.242 "thread": { 00:13:20.242 "mask": "0x400", 00:13:20.242 "tpoint_mask": "0x0" 00:13:20.242 }, 00:13:20.242 "nvme_pcie": { 00:13:20.242 "mask": "0x800", 00:13:20.242 "tpoint_mask": "0x0" 00:13:20.242 }, 00:13:20.242 "iaa": { 00:13:20.242 "mask": "0x1000", 00:13:20.242 "tpoint_mask": "0x0" 00:13:20.242 }, 00:13:20.242 "nvme_tcp": { 00:13:20.242 "mask": "0x2000", 00:13:20.242 "tpoint_mask": "0x0" 00:13:20.242 }, 00:13:20.242 "bdev_nvme": { 00:13:20.242 "mask": "0x4000", 00:13:20.242 "tpoint_mask": "0x0" 00:13:20.242 }, 00:13:20.242 "sock": { 00:13:20.242 "mask": "0x8000", 00:13:20.242 "tpoint_mask": "0x0" 00:13:20.242 } 00:13:20.242 }' 00:13:20.242 12:15:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:13:20.242 12:15:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:13:20.242 12:15:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:13:20.500 12:15:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:13:20.500 12:15:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:13:20.500 12:15:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:13:20.500 12:15:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:13:20.500 12:15:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:13:20.500 12:15:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:13:20.500 ************************************ 00:13:20.500 END TEST rpc_trace_cmd_test 00:13:20.500 ************************************ 00:13:20.500 12:15:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:13:20.500 00:13:20.500 real 0m0.260s 00:13:20.500 user 0m0.206s 00:13:20.500 sys 0m0.045s 00:13:20.500 12:15:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:20.500 
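A note on the checks just completed: rpc_trace_cmd_test asserts that trace_get_info reports a shm path, a tpoint_group_mask, and a non-zero bdev tpoint mask. A minimal manual sketch of the same queries is below; the 0x8 mask (the bdev trace group seen in the output) and the jq filters are taken from the log, while the layout and the use of scripts/rpc.py in place of the test's rpc_cmd wrapper are illustrative assumptions.

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -e 0x8 &    # enable the bdev tracepoint group
sleep 3                                                            # crude wait for /var/tmp/spdk.sock; the harness uses waitforlisten
scripts/rpc.py trace_get_info | jq 'has("tpoint_shm_path")'        # expected: true
scripts/rpc.py trace_get_info | jq -r '.bdev.tpoint_mask'          # expected: non-zero (0xffffffffffffffff above)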
12:15:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.500 12:15:29 rpc -- common/autotest_common.sh@1142 -- # return 0 00:13:20.500 12:15:29 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:13:20.500 12:15:29 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:13:20.500 12:15:29 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:13:20.500 12:15:29 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:20.500 12:15:29 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:20.500 12:15:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.500 ************************************ 00:13:20.500 START TEST rpc_daemon_integrity 00:13:20.500 ************************************ 00:13:20.500 12:15:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:13:20.500 12:15:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:20.500 12:15:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.500 12:15:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:13:20.500 12:15:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.500 12:15:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:13:20.500 12:15:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:13:20.759 12:15:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:13:20.759 12:15:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:13:20.759 12:15:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.759 12:15:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:13:20.759 12:15:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.759 12:15:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:13:20.759 12:15:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:13:20.759 12:15:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.759 12:15:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:13:20.759 12:15:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.759 12:15:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:13:20.759 { 00:13:20.759 "name": "Malloc2", 00:13:20.759 "aliases": [ 00:13:20.759 "7004f0be-aba6-4948-b184-8953dd804375" 00:13:20.759 ], 00:13:20.759 "product_name": "Malloc disk", 00:13:20.759 "block_size": 512, 00:13:20.759 "num_blocks": 16384, 00:13:20.759 "uuid": "7004f0be-aba6-4948-b184-8953dd804375", 00:13:20.759 "assigned_rate_limits": { 00:13:20.759 "rw_ios_per_sec": 0, 00:13:20.759 "rw_mbytes_per_sec": 0, 00:13:20.759 "r_mbytes_per_sec": 0, 00:13:20.759 "w_mbytes_per_sec": 0 00:13:20.759 }, 00:13:20.759 "claimed": false, 00:13:20.759 "zoned": false, 00:13:20.759 "supported_io_types": { 00:13:20.759 "read": true, 00:13:20.759 "write": true, 00:13:20.759 "unmap": true, 00:13:20.759 "flush": true, 00:13:20.759 "reset": true, 00:13:20.759 "nvme_admin": false, 00:13:20.759 "nvme_io": false, 00:13:20.759 "nvme_io_md": false, 00:13:20.759 "write_zeroes": true, 00:13:20.759 "zcopy": true, 00:13:20.759 "get_zone_info": false, 00:13:20.759 "zone_management": false, 00:13:20.759 "zone_append": false, 00:13:20.759 "compare": false, 00:13:20.759 "compare_and_write": false, 00:13:20.759 "abort": true, 00:13:20.759 "seek_hole": false, 
00:13:20.759 "seek_data": false, 00:13:20.759 "copy": true, 00:13:20.759 "nvme_iov_md": false 00:13:20.759 }, 00:13:20.759 "memory_domains": [ 00:13:20.759 { 00:13:20.759 "dma_device_id": "system", 00:13:20.759 "dma_device_type": 1 00:13:20.759 }, 00:13:20.759 { 00:13:20.759 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:20.759 "dma_device_type": 2 00:13:20.759 } 00:13:20.759 ], 00:13:20.759 "driver_specific": {} 00:13:20.759 } 00:13:20.759 ]' 00:13:20.759 12:15:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:13:20.759 12:15:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:13:20.759 12:15:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:13:20.759 12:15:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.759 12:15:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:13:20.759 [2024-07-10 12:15:30.137086] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:13:20.759 [2024-07-10 12:15:30.137168] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:20.759 [2024-07-10 12:15:30.137201] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:20.759 [2024-07-10 12:15:30.137214] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:20.759 [2024-07-10 12:15:30.139851] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:20.760 [2024-07-10 12:15:30.139892] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:13:20.760 Passthru0 00:13:20.760 12:15:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.760 12:15:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:13:20.760 12:15:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.760 12:15:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:13:20.760 12:15:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.760 12:15:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:13:20.760 { 00:13:20.760 "name": "Malloc2", 00:13:20.760 "aliases": [ 00:13:20.760 "7004f0be-aba6-4948-b184-8953dd804375" 00:13:20.760 ], 00:13:20.760 "product_name": "Malloc disk", 00:13:20.760 "block_size": 512, 00:13:20.760 "num_blocks": 16384, 00:13:20.760 "uuid": "7004f0be-aba6-4948-b184-8953dd804375", 00:13:20.760 "assigned_rate_limits": { 00:13:20.760 "rw_ios_per_sec": 0, 00:13:20.760 "rw_mbytes_per_sec": 0, 00:13:20.760 "r_mbytes_per_sec": 0, 00:13:20.760 "w_mbytes_per_sec": 0 00:13:20.760 }, 00:13:20.760 "claimed": true, 00:13:20.760 "claim_type": "exclusive_write", 00:13:20.760 "zoned": false, 00:13:20.760 "supported_io_types": { 00:13:20.760 "read": true, 00:13:20.760 "write": true, 00:13:20.760 "unmap": true, 00:13:20.760 "flush": true, 00:13:20.760 "reset": true, 00:13:20.760 "nvme_admin": false, 00:13:20.760 "nvme_io": false, 00:13:20.760 "nvme_io_md": false, 00:13:20.760 "write_zeroes": true, 00:13:20.760 "zcopy": true, 00:13:20.760 "get_zone_info": false, 00:13:20.760 "zone_management": false, 00:13:20.760 "zone_append": false, 00:13:20.760 "compare": false, 00:13:20.760 "compare_and_write": false, 00:13:20.760 "abort": true, 00:13:20.760 "seek_hole": false, 00:13:20.760 "seek_data": false, 00:13:20.760 "copy": true, 00:13:20.760 "nvme_iov_md": false 00:13:20.760 }, 00:13:20.760 
"memory_domains": [ 00:13:20.760 { 00:13:20.760 "dma_device_id": "system", 00:13:20.760 "dma_device_type": 1 00:13:20.760 }, 00:13:20.760 { 00:13:20.760 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:20.760 "dma_device_type": 2 00:13:20.760 } 00:13:20.760 ], 00:13:20.760 "driver_specific": {} 00:13:20.760 }, 00:13:20.760 { 00:13:20.760 "name": "Passthru0", 00:13:20.760 "aliases": [ 00:13:20.760 "348364ec-548f-570d-94be-ce89df1b6e00" 00:13:20.760 ], 00:13:20.760 "product_name": "passthru", 00:13:20.760 "block_size": 512, 00:13:20.760 "num_blocks": 16384, 00:13:20.760 "uuid": "348364ec-548f-570d-94be-ce89df1b6e00", 00:13:20.760 "assigned_rate_limits": { 00:13:20.760 "rw_ios_per_sec": 0, 00:13:20.760 "rw_mbytes_per_sec": 0, 00:13:20.760 "r_mbytes_per_sec": 0, 00:13:20.760 "w_mbytes_per_sec": 0 00:13:20.760 }, 00:13:20.760 "claimed": false, 00:13:20.760 "zoned": false, 00:13:20.760 "supported_io_types": { 00:13:20.760 "read": true, 00:13:20.760 "write": true, 00:13:20.760 "unmap": true, 00:13:20.760 "flush": true, 00:13:20.760 "reset": true, 00:13:20.760 "nvme_admin": false, 00:13:20.760 "nvme_io": false, 00:13:20.760 "nvme_io_md": false, 00:13:20.760 "write_zeroes": true, 00:13:20.760 "zcopy": true, 00:13:20.760 "get_zone_info": false, 00:13:20.760 "zone_management": false, 00:13:20.760 "zone_append": false, 00:13:20.760 "compare": false, 00:13:20.760 "compare_and_write": false, 00:13:20.760 "abort": true, 00:13:20.760 "seek_hole": false, 00:13:20.760 "seek_data": false, 00:13:20.760 "copy": true, 00:13:20.760 "nvme_iov_md": false 00:13:20.760 }, 00:13:20.760 "memory_domains": [ 00:13:20.760 { 00:13:20.760 "dma_device_id": "system", 00:13:20.760 "dma_device_type": 1 00:13:20.760 }, 00:13:20.760 { 00:13:20.760 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:20.760 "dma_device_type": 2 00:13:20.760 } 00:13:20.760 ], 00:13:20.760 "driver_specific": { 00:13:20.760 "passthru": { 00:13:20.760 "name": "Passthru0", 00:13:20.760 "base_bdev_name": "Malloc2" 00:13:20.760 } 00:13:20.760 } 00:13:20.760 } 00:13:20.760 ]' 00:13:20.760 12:15:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:13:20.760 12:15:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:13:20.760 12:15:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:13:20.760 12:15:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.760 12:15:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:13:21.019 12:15:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.019 12:15:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:13:21.019 12:15:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.019 12:15:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:13:21.019 12:15:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.019 12:15:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:13:21.019 12:15:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.019 12:15:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:13:21.019 12:15:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.019 12:15:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:13:21.019 12:15:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:13:21.019 
************************************ 00:13:21.019 END TEST rpc_daemon_integrity 00:13:21.019 ************************************ 00:13:21.019 12:15:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:13:21.019 00:13:21.019 real 0m0.394s 00:13:21.019 user 0m0.196s 00:13:21.019 sys 0m0.072s 00:13:21.019 12:15:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:21.019 12:15:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:13:21.019 12:15:30 rpc -- common/autotest_common.sh@1142 -- # return 0 00:13:21.019 12:15:30 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:13:21.019 12:15:30 rpc -- rpc/rpc.sh@84 -- # killprocess 62102 00:13:21.019 12:15:30 rpc -- common/autotest_common.sh@948 -- # '[' -z 62102 ']' 00:13:21.019 12:15:30 rpc -- common/autotest_common.sh@952 -- # kill -0 62102 00:13:21.019 12:15:30 rpc -- common/autotest_common.sh@953 -- # uname 00:13:21.019 12:15:30 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:21.019 12:15:30 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62102 00:13:21.019 killing process with pid 62102 00:13:21.019 12:15:30 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:21.019 12:15:30 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:21.019 12:15:30 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62102' 00:13:21.019 12:15:30 rpc -- common/autotest_common.sh@967 -- # kill 62102 00:13:21.019 12:15:30 rpc -- common/autotest_common.sh@972 -- # wait 62102 00:13:24.352 ************************************ 00:13:24.352 END TEST rpc 00:13:24.352 ************************************ 00:13:24.352 00:13:24.352 real 0m6.421s 00:13:24.352 user 0m6.801s 00:13:24.352 sys 0m1.112s 00:13:24.352 12:15:33 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:24.352 12:15:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.352 12:15:33 -- common/autotest_common.sh@1142 -- # return 0 00:13:24.352 12:15:33 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:13:24.352 12:15:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:24.352 12:15:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:24.352 12:15:33 -- common/autotest_common.sh@10 -- # set +x 00:13:24.352 ************************************ 00:13:24.352 START TEST skip_rpc 00:13:24.352 ************************************ 00:13:24.352 12:15:33 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:13:24.352 * Looking for test storage... 
00:13:24.352 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:13:24.352 12:15:33 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:13:24.352 12:15:33 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:13:24.352 12:15:33 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:13:24.352 12:15:33 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:24.352 12:15:33 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:24.352 12:15:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.352 ************************************ 00:13:24.352 START TEST skip_rpc 00:13:24.352 ************************************ 00:13:24.352 12:15:33 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:13:24.352 12:15:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=62334 00:13:24.352 12:15:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:13:24.352 12:15:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:13:24.352 12:15:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:13:24.610 [2024-07-10 12:15:33.854377] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:13:24.610 [2024-07-10 12:15:33.854780] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62334 ] 00:13:24.610 [2024-07-10 12:15:34.027690] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:24.868 [2024-07-10 12:15:34.289381] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:30.155 12:15:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:13:30.155 12:15:38 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:13:30.155 12:15:38 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:13:30.155 12:15:38 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:13:30.155 12:15:38 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:30.155 12:15:38 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:13:30.155 12:15:38 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:30.156 12:15:38 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:13:30.156 12:15:38 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.156 12:15:38 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:30.156 12:15:38 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:13:30.156 12:15:38 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:13:30.156 12:15:38 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:30.156 12:15:38 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:30.156 12:15:38 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:30.156 12:15:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:13:30.156 12:15:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 62334 
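The skip_rpc case above starts the target with --no-rpc-server and then requires an RPC to fail; the NOT wrapper turns the expected non-zero exit into a pass. A rough standalone equivalent, with the flags and the 5-second wait taken from the log and the error handling simplified:

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
spdk_pid=$!
sleep 5
if scripts/rpc.py spdk_get_version; then
    echo "unexpected: RPC server answered"; kill "$spdk_pid"; exit 1
fi
kill "$spdk_pid"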
00:13:30.156 12:15:38 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 62334 ']' 00:13:30.156 12:15:38 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 62334 00:13:30.156 12:15:38 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:13:30.156 12:15:38 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:30.156 12:15:38 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62334 00:13:30.156 killing process with pid 62334 00:13:30.156 12:15:38 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:30.156 12:15:38 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:30.156 12:15:38 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62334' 00:13:30.156 12:15:38 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 62334 00:13:30.156 12:15:38 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 62334 00:13:32.731 ************************************ 00:13:32.731 END TEST skip_rpc 00:13:32.731 ************************************ 00:13:32.731 00:13:32.731 real 0m7.931s 00:13:32.731 user 0m7.266s 00:13:32.731 sys 0m0.565s 00:13:32.731 12:15:41 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:32.731 12:15:41 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.731 12:15:41 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:13:32.731 12:15:41 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:13:32.731 12:15:41 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:32.731 12:15:41 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:32.731 12:15:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.731 ************************************ 00:13:32.731 START TEST skip_rpc_with_json 00:13:32.731 ************************************ 00:13:32.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:32.731 12:15:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:13:32.731 12:15:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:13:32.731 12:15:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=62444 00:13:32.731 12:15:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:13:32.731 12:15:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 62444 00:13:32.731 12:15:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 62444 ']' 00:13:32.731 12:15:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:32.731 12:15:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:32.731 12:15:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
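The skip_rpc_with_json case starting here runs in two phases: configure a live target over RPC, snapshot the result with save_config (that snapshot is the large JSON dump a few entries below), then restart the target without an RPC server from that file and grep its output for the transport init message. In outline, with the RPCs and paths as they appear in the log and the output redirection assumed (the harness routes it through LOG_PATH):

scripts/rpc.py nvmf_create_transport -t tcp
scripts/rpc.py save_config > /home/vagrant/spdk_repo/spdk/test/rpc/config.json
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 \
    --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json \
    > /home/vagrant/spdk_repo/spdk/test/rpc/log.txt &
sleep 5                                                  # the test sleeps, then kills the pid before grepping
kill $!
grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt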
00:13:32.731 12:15:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:13:32.731 12:15:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:32.731 12:15:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:13:32.731 [2024-07-10 12:15:41.860148] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:13:32.731 [2024-07-10 12:15:41.860305] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62444 ] 00:13:32.731 [2024-07-10 12:15:42.038421] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:32.991 [2024-07-10 12:15:42.292513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:34.370 12:15:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:34.370 12:15:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:13:34.370 12:15:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:13:34.370 12:15:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:34.370 12:15:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:13:34.370 [2024-07-10 12:15:43.578393] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:13:34.370 request: 00:13:34.370 { 00:13:34.370 "trtype": "tcp", 00:13:34.370 "method": "nvmf_get_transports", 00:13:34.370 "req_id": 1 00:13:34.370 } 00:13:34.370 Got JSON-RPC error response 00:13:34.370 response: 00:13:34.370 { 00:13:34.370 "code": -19, 00:13:34.370 "message": "No such device" 00:13:34.370 } 00:13:34.370 12:15:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:13:34.370 12:15:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:13:34.370 12:15:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:34.370 12:15:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:13:34.370 [2024-07-10 12:15:43.590466] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:34.370 12:15:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:34.370 12:15:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:13:34.370 12:15:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:34.370 12:15:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:13:34.370 12:15:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:34.370 12:15:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:13:34.370 { 00:13:34.370 "subsystems": [ 00:13:34.370 { 00:13:34.370 "subsystem": "keyring", 00:13:34.370 "config": [] 00:13:34.370 }, 00:13:34.370 { 00:13:34.370 "subsystem": "iobuf", 00:13:34.370 "config": [ 00:13:34.370 { 00:13:34.370 "method": "iobuf_set_options", 00:13:34.370 "params": { 00:13:34.370 "small_pool_count": 8192, 00:13:34.370 "large_pool_count": 1024, 00:13:34.370 "small_bufsize": 8192, 00:13:34.370 "large_bufsize": 135168 00:13:34.370 } 00:13:34.370 } 
00:13:34.370 ] 00:13:34.370 }, 00:13:34.370 { 00:13:34.370 "subsystem": "sock", 00:13:34.370 "config": [ 00:13:34.370 { 00:13:34.370 "method": "sock_set_default_impl", 00:13:34.370 "params": { 00:13:34.370 "impl_name": "posix" 00:13:34.370 } 00:13:34.370 }, 00:13:34.370 { 00:13:34.370 "method": "sock_impl_set_options", 00:13:34.370 "params": { 00:13:34.370 "impl_name": "ssl", 00:13:34.370 "recv_buf_size": 4096, 00:13:34.370 "send_buf_size": 4096, 00:13:34.370 "enable_recv_pipe": true, 00:13:34.370 "enable_quickack": false, 00:13:34.370 "enable_placement_id": 0, 00:13:34.370 "enable_zerocopy_send_server": true, 00:13:34.370 "enable_zerocopy_send_client": false, 00:13:34.370 "zerocopy_threshold": 0, 00:13:34.370 "tls_version": 0, 00:13:34.370 "enable_ktls": false 00:13:34.370 } 00:13:34.370 }, 00:13:34.370 { 00:13:34.370 "method": "sock_impl_set_options", 00:13:34.370 "params": { 00:13:34.370 "impl_name": "posix", 00:13:34.370 "recv_buf_size": 2097152, 00:13:34.370 "send_buf_size": 2097152, 00:13:34.370 "enable_recv_pipe": true, 00:13:34.370 "enable_quickack": false, 00:13:34.370 "enable_placement_id": 0, 00:13:34.370 "enable_zerocopy_send_server": true, 00:13:34.370 "enable_zerocopy_send_client": false, 00:13:34.370 "zerocopy_threshold": 0, 00:13:34.370 "tls_version": 0, 00:13:34.370 "enable_ktls": false 00:13:34.370 } 00:13:34.370 } 00:13:34.370 ] 00:13:34.370 }, 00:13:34.370 { 00:13:34.370 "subsystem": "vmd", 00:13:34.370 "config": [] 00:13:34.370 }, 00:13:34.370 { 00:13:34.370 "subsystem": "accel", 00:13:34.370 "config": [ 00:13:34.370 { 00:13:34.370 "method": "accel_set_options", 00:13:34.370 "params": { 00:13:34.370 "small_cache_size": 128, 00:13:34.370 "large_cache_size": 16, 00:13:34.370 "task_count": 2048, 00:13:34.370 "sequence_count": 2048, 00:13:34.370 "buf_count": 2048 00:13:34.370 } 00:13:34.370 } 00:13:34.370 ] 00:13:34.370 }, 00:13:34.370 { 00:13:34.370 "subsystem": "bdev", 00:13:34.370 "config": [ 00:13:34.370 { 00:13:34.370 "method": "bdev_set_options", 00:13:34.370 "params": { 00:13:34.370 "bdev_io_pool_size": 65535, 00:13:34.370 "bdev_io_cache_size": 256, 00:13:34.370 "bdev_auto_examine": true, 00:13:34.370 "iobuf_small_cache_size": 128, 00:13:34.370 "iobuf_large_cache_size": 16 00:13:34.370 } 00:13:34.370 }, 00:13:34.370 { 00:13:34.370 "method": "bdev_raid_set_options", 00:13:34.370 "params": { 00:13:34.370 "process_window_size_kb": 1024 00:13:34.370 } 00:13:34.370 }, 00:13:34.370 { 00:13:34.370 "method": "bdev_iscsi_set_options", 00:13:34.370 "params": { 00:13:34.370 "timeout_sec": 30 00:13:34.370 } 00:13:34.370 }, 00:13:34.370 { 00:13:34.370 "method": "bdev_nvme_set_options", 00:13:34.370 "params": { 00:13:34.370 "action_on_timeout": "none", 00:13:34.370 "timeout_us": 0, 00:13:34.370 "timeout_admin_us": 0, 00:13:34.370 "keep_alive_timeout_ms": 10000, 00:13:34.370 "arbitration_burst": 0, 00:13:34.370 "low_priority_weight": 0, 00:13:34.370 "medium_priority_weight": 0, 00:13:34.370 "high_priority_weight": 0, 00:13:34.370 "nvme_adminq_poll_period_us": 10000, 00:13:34.370 "nvme_ioq_poll_period_us": 0, 00:13:34.370 "io_queue_requests": 0, 00:13:34.370 "delay_cmd_submit": true, 00:13:34.370 "transport_retry_count": 4, 00:13:34.370 "bdev_retry_count": 3, 00:13:34.370 "transport_ack_timeout": 0, 00:13:34.370 "ctrlr_loss_timeout_sec": 0, 00:13:34.370 "reconnect_delay_sec": 0, 00:13:34.370 "fast_io_fail_timeout_sec": 0, 00:13:34.370 "disable_auto_failback": false, 00:13:34.370 "generate_uuids": false, 00:13:34.370 "transport_tos": 0, 00:13:34.370 "nvme_error_stat": false, 
00:13:34.370 "rdma_srq_size": 0, 00:13:34.370 "io_path_stat": false, 00:13:34.370 "allow_accel_sequence": false, 00:13:34.370 "rdma_max_cq_size": 0, 00:13:34.370 "rdma_cm_event_timeout_ms": 0, 00:13:34.370 "dhchap_digests": [ 00:13:34.370 "sha256", 00:13:34.370 "sha384", 00:13:34.370 "sha512" 00:13:34.370 ], 00:13:34.370 "dhchap_dhgroups": [ 00:13:34.370 "null", 00:13:34.370 "ffdhe2048", 00:13:34.370 "ffdhe3072", 00:13:34.370 "ffdhe4096", 00:13:34.370 "ffdhe6144", 00:13:34.370 "ffdhe8192" 00:13:34.370 ] 00:13:34.370 } 00:13:34.370 }, 00:13:34.370 { 00:13:34.370 "method": "bdev_nvme_set_hotplug", 00:13:34.370 "params": { 00:13:34.370 "period_us": 100000, 00:13:34.370 "enable": false 00:13:34.370 } 00:13:34.370 }, 00:13:34.370 { 00:13:34.370 "method": "bdev_wait_for_examine" 00:13:34.370 } 00:13:34.370 ] 00:13:34.370 }, 00:13:34.370 { 00:13:34.370 "subsystem": "scsi", 00:13:34.370 "config": null 00:13:34.370 }, 00:13:34.370 { 00:13:34.370 "subsystem": "scheduler", 00:13:34.370 "config": [ 00:13:34.370 { 00:13:34.370 "method": "framework_set_scheduler", 00:13:34.370 "params": { 00:13:34.370 "name": "static" 00:13:34.370 } 00:13:34.370 } 00:13:34.370 ] 00:13:34.370 }, 00:13:34.370 { 00:13:34.370 "subsystem": "vhost_scsi", 00:13:34.370 "config": [] 00:13:34.370 }, 00:13:34.370 { 00:13:34.370 "subsystem": "vhost_blk", 00:13:34.370 "config": [] 00:13:34.370 }, 00:13:34.370 { 00:13:34.370 "subsystem": "ublk", 00:13:34.370 "config": [] 00:13:34.370 }, 00:13:34.370 { 00:13:34.370 "subsystem": "nbd", 00:13:34.370 "config": [] 00:13:34.370 }, 00:13:34.370 { 00:13:34.370 "subsystem": "nvmf", 00:13:34.370 "config": [ 00:13:34.370 { 00:13:34.370 "method": "nvmf_set_config", 00:13:34.370 "params": { 00:13:34.370 "discovery_filter": "match_any", 00:13:34.370 "admin_cmd_passthru": { 00:13:34.370 "identify_ctrlr": false 00:13:34.370 } 00:13:34.370 } 00:13:34.370 }, 00:13:34.370 { 00:13:34.370 "method": "nvmf_set_max_subsystems", 00:13:34.370 "params": { 00:13:34.370 "max_subsystems": 1024 00:13:34.370 } 00:13:34.370 }, 00:13:34.370 { 00:13:34.370 "method": "nvmf_set_crdt", 00:13:34.370 "params": { 00:13:34.370 "crdt1": 0, 00:13:34.370 "crdt2": 0, 00:13:34.370 "crdt3": 0 00:13:34.371 } 00:13:34.371 }, 00:13:34.371 { 00:13:34.371 "method": "nvmf_create_transport", 00:13:34.371 "params": { 00:13:34.371 "trtype": "TCP", 00:13:34.371 "max_queue_depth": 128, 00:13:34.371 "max_io_qpairs_per_ctrlr": 127, 00:13:34.371 "in_capsule_data_size": 4096, 00:13:34.371 "max_io_size": 131072, 00:13:34.371 "io_unit_size": 131072, 00:13:34.371 "max_aq_depth": 128, 00:13:34.371 "num_shared_buffers": 511, 00:13:34.371 "buf_cache_size": 4294967295, 00:13:34.371 "dif_insert_or_strip": false, 00:13:34.371 "zcopy": false, 00:13:34.371 "c2h_success": true, 00:13:34.371 "sock_priority": 0, 00:13:34.371 "abort_timeout_sec": 1, 00:13:34.371 "ack_timeout": 0, 00:13:34.371 "data_wr_pool_size": 0 00:13:34.371 } 00:13:34.371 } 00:13:34.371 ] 00:13:34.371 }, 00:13:34.371 { 00:13:34.371 "subsystem": "iscsi", 00:13:34.371 "config": [ 00:13:34.371 { 00:13:34.371 "method": "iscsi_set_options", 00:13:34.371 "params": { 00:13:34.371 "node_base": "iqn.2016-06.io.spdk", 00:13:34.371 "max_sessions": 128, 00:13:34.371 "max_connections_per_session": 2, 00:13:34.371 "max_queue_depth": 64, 00:13:34.371 "default_time2wait": 2, 00:13:34.371 "default_time2retain": 20, 00:13:34.371 "first_burst_length": 8192, 00:13:34.371 "immediate_data": true, 00:13:34.371 "allow_duplicated_isid": false, 00:13:34.371 "error_recovery_level": 0, 00:13:34.371 "nop_timeout": 60, 
00:13:34.371 "nop_in_interval": 30, 00:13:34.371 "disable_chap": false, 00:13:34.371 "require_chap": false, 00:13:34.371 "mutual_chap": false, 00:13:34.371 "chap_group": 0, 00:13:34.371 "max_large_datain_per_connection": 64, 00:13:34.371 "max_r2t_per_connection": 4, 00:13:34.371 "pdu_pool_size": 36864, 00:13:34.371 "immediate_data_pool_size": 16384, 00:13:34.371 "data_out_pool_size": 2048 00:13:34.371 } 00:13:34.371 } 00:13:34.371 ] 00:13:34.371 } 00:13:34.371 ] 00:13:34.371 } 00:13:34.371 12:15:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:34.371 12:15:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 62444 00:13:34.371 12:15:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 62444 ']' 00:13:34.371 12:15:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 62444 00:13:34.371 12:15:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:13:34.371 12:15:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:34.371 12:15:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62444 00:13:34.371 killing process with pid 62444 00:13:34.371 12:15:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:34.371 12:15:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:34.371 12:15:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62444' 00:13:34.371 12:15:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 62444 00:13:34.371 12:15:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 62444 00:13:37.719 12:15:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=62511 00:13:37.719 12:15:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:13:37.719 12:15:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:13:43.075 12:15:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 62511 00:13:43.075 12:15:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 62511 ']' 00:13:43.075 12:15:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 62511 00:13:43.075 12:15:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:13:43.075 12:15:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:43.075 12:15:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62511 00:13:43.075 killing process with pid 62511 00:13:43.075 12:15:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:43.075 12:15:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:43.075 12:15:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62511' 00:13:43.075 12:15:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 62511 00:13:43.075 12:15:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 62511 00:13:45.606 12:15:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' 
/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:13:45.606 12:15:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:13:45.606 00:13:45.606 real 0m13.018s 00:13:45.606 user 0m11.930s 00:13:45.606 sys 0m1.382s 00:13:45.606 12:15:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:45.606 12:15:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:13:45.606 ************************************ 00:13:45.606 END TEST skip_rpc_with_json 00:13:45.606 ************************************ 00:13:45.606 12:15:54 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:13:45.606 12:15:54 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:13:45.606 12:15:54 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:45.606 12:15:54 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:45.606 12:15:54 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:45.606 ************************************ 00:13:45.606 START TEST skip_rpc_with_delay 00:13:45.606 ************************************ 00:13:45.606 12:15:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:13:45.606 12:15:54 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:13:45.606 12:15:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:13:45.606 12:15:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:13:45.606 12:15:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:45.606 12:15:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:45.606 12:15:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:45.606 12:15:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:45.606 12:15:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:45.606 12:15:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:45.606 12:15:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:45.606 12:15:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:13:45.606 12:15:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:13:45.606 [2024-07-10 12:15:54.950649] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
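The skip_rpc_with_delay case that ends in the error above is purely negative: --wait-for-rpc asks the app to pause until an RPC tells it to continue, which is meaningless when --no-rpc-server suppresses the RPC server, so startup must be refused. Stripped of the test's NOT/valid_exec_arg machinery, the check amounts to:

# must exit non-zero with "Cannot use '--wait-for-rpc' if no RPC server is going to be started."
if /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
    echo "unexpected: conflicting flags were accepted"; exit 1
fi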
00:13:45.606 [2024-07-10 12:15:54.950850] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:13:45.606 12:15:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:13:45.606 12:15:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:45.606 12:15:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:45.606 12:15:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:45.606 00:13:45.606 real 0m0.187s 00:13:45.606 user 0m0.092s 00:13:45.606 sys 0m0.092s 00:13:45.606 12:15:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:45.606 ************************************ 00:13:45.606 END TEST skip_rpc_with_delay 00:13:45.606 ************************************ 00:13:45.606 12:15:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:13:45.606 12:15:55 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:13:45.871 12:15:55 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:13:45.871 12:15:55 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:13:45.871 12:15:55 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:13:45.871 12:15:55 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:45.871 12:15:55 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:45.871 12:15:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:45.871 ************************************ 00:13:45.871 START TEST exit_on_failed_rpc_init 00:13:45.871 ************************************ 00:13:45.871 12:15:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:13:45.871 12:15:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=62650 00:13:45.871 12:15:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:13:45.871 12:15:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 62650 00:13:45.871 12:15:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 62650 ']' 00:13:45.871 12:15:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:45.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:45.871 12:15:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:45.871 12:15:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:45.871 12:15:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:45.871 12:15:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:13:45.871 [2024-07-10 12:15:55.225445] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:13:45.871 [2024-07-10 12:15:55.225612] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62650 ] 00:13:46.130 [2024-07-10 12:15:55.401718] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:46.390 [2024-07-10 12:15:55.657490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:47.326 12:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:47.326 12:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:13:47.326 12:15:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:13:47.326 12:15:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:13:47.326 12:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:13:47.326 12:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:13:47.326 12:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:47.585 12:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:47.585 12:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:47.585 12:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:47.585 12:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:47.585 12:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:47.585 12:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:47.585 12:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:13:47.585 12:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:13:47.585 [2024-07-10 12:15:56.937095] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:13:47.586 [2024-07-10 12:15:56.937274] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62673 ] 00:13:47.843 [2024-07-10 12:15:57.101659] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:48.101 [2024-07-10 12:15:57.459673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:48.102 [2024-07-10 12:15:57.459826] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
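exit_on_failed_rpc_init, running here, starts a second spdk_tgt on core mask 0x2 while the first (-m 0x1, pid 62650) still owns the default RPC socket, and expects the newcomer to abort during RPC init with the 'socket path in use' error just logged. Reduced to the two invocations visible above (a second instance would normally pass its own socket path instead, e.g. via -r, which this test deliberately omits):

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &    # first instance binds /var/tmp/spdk.sock
sleep 1                                                     # crude stand-in for the test's waitforlisten
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2      # exits non-zero: socket path already in use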
00:13:48.102 [2024-07-10 12:15:57.459850] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:13:48.102 [2024-07-10 12:15:57.459868] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:48.677 12:15:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:13:48.677 12:15:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:48.677 12:15:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:13:48.677 12:15:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:13:48.677 12:15:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:13:48.677 12:15:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:48.677 12:15:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:13:48.677 12:15:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 62650 00:13:48.677 12:15:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 62650 ']' 00:13:48.677 12:15:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 62650 00:13:48.677 12:15:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:13:48.677 12:15:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:48.677 12:15:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62650 00:13:48.677 killing process with pid 62650 00:13:48.677 12:15:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:48.677 12:15:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:48.677 12:15:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62650' 00:13:48.677 12:15:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 62650 00:13:48.677 12:15:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 62650 00:13:51.965 00:13:51.965 real 0m5.744s 00:13:51.965 user 0m6.353s 00:13:51.965 sys 0m0.861s 00:13:51.965 ************************************ 00:13:51.965 END TEST exit_on_failed_rpc_init 00:13:51.965 ************************************ 00:13:51.965 12:16:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:51.965 12:16:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:13:51.965 12:16:00 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:13:51.965 12:16:00 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:13:51.965 00:13:51.965 real 0m27.311s 00:13:51.965 user 0m25.761s 00:13:51.965 sys 0m3.193s 00:13:51.965 ************************************ 00:13:51.965 END TEST skip_rpc 00:13:51.965 ************************************ 00:13:51.965 12:16:00 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:51.966 12:16:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.966 12:16:00 -- common/autotest_common.sh@1142 -- # return 0 00:13:51.966 12:16:00 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:13:51.966 12:16:00 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:51.966 
12:16:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:51.966 12:16:00 -- common/autotest_common.sh@10 -- # set +x 00:13:51.966 ************************************ 00:13:51.966 START TEST rpc_client 00:13:51.966 ************************************ 00:13:51.966 12:16:00 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:13:51.966 * Looking for test storage... 00:13:51.966 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:13:51.966 12:16:01 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:13:51.966 OK 00:13:51.966 12:16:01 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:13:51.966 00:13:51.966 real 0m0.184s 00:13:51.966 user 0m0.074s 00:13:51.966 sys 0m0.116s 00:13:51.966 12:16:01 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:51.966 12:16:01 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:13:51.966 ************************************ 00:13:51.966 END TEST rpc_client 00:13:51.966 ************************************ 00:13:51.966 12:16:01 -- common/autotest_common.sh@1142 -- # return 0 00:13:51.966 12:16:01 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:13:51.966 12:16:01 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:51.966 12:16:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:51.966 12:16:01 -- common/autotest_common.sh@10 -- # set +x 00:13:51.966 ************************************ 00:13:51.966 START TEST json_config 00:13:51.966 ************************************ 00:13:51.966 12:16:01 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:13:51.966 12:16:01 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:51.966 12:16:01 json_config -- nvmf/common.sh@7 -- # uname -s 00:13:51.966 12:16:01 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:51.966 12:16:01 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:51.966 12:16:01 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:51.966 12:16:01 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:51.966 12:16:01 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:51.966 12:16:01 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:51.966 12:16:01 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:51.966 12:16:01 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:51.966 12:16:01 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:51.966 12:16:01 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:51.966 12:16:01 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02a694d1-0c30-4741-8b3e-64bbf390c556 00:13:51.966 12:16:01 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=02a694d1-0c30-4741-8b3e-64bbf390c556 00:13:51.966 12:16:01 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:51.966 12:16:01 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:51.966 12:16:01 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:13:51.966 12:16:01 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:51.966 12:16:01 json_config -- 
nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:51.966 12:16:01 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:51.966 12:16:01 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:51.966 12:16:01 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:51.966 12:16:01 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.966 12:16:01 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.966 12:16:01 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.966 12:16:01 json_config -- paths/export.sh@5 -- # export PATH 00:13:51.966 12:16:01 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.966 12:16:01 json_config -- nvmf/common.sh@47 -- # : 0 00:13:51.966 12:16:01 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:51.966 12:16:01 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:51.966 12:16:01 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:51.966 12:16:01 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:51.966 12:16:01 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:51.966 12:16:01 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:51.966 12:16:01 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:51.966 12:16:01 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:51.966 WARNING: No tests are enabled so not running JSON configuration tests 00:13:51.966 12:16:01 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:13:51.966 12:16:01 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:13:51.966 12:16:01 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:13:51.966 12:16:01 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:13:51.966 12:16:01 json_config -- json_config/json_config.sh@26 -- # 
(( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:13:51.966 12:16:01 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:13:51.966 12:16:01 json_config -- json_config/json_config.sh@28 -- # exit 0 00:13:51.966 00:13:51.966 real 0m0.121s 00:13:51.966 user 0m0.061s 00:13:51.966 sys 0m0.059s 00:13:51.966 ************************************ 00:13:51.966 END TEST json_config 00:13:51.966 ************************************ 00:13:51.966 12:16:01 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:51.966 12:16:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:13:51.966 12:16:01 -- common/autotest_common.sh@1142 -- # return 0 00:13:51.966 12:16:01 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:13:51.966 12:16:01 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:51.966 12:16:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:51.966 12:16:01 -- common/autotest_common.sh@10 -- # set +x 00:13:51.966 ************************************ 00:13:51.966 START TEST json_config_extra_key 00:13:51.966 ************************************ 00:13:51.966 12:16:01 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:13:52.225 12:16:01 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:52.225 12:16:01 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:13:52.225 12:16:01 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:52.225 12:16:01 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:52.225 12:16:01 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:52.225 12:16:01 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:52.225 12:16:01 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:52.225 12:16:01 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:52.225 12:16:01 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:52.225 12:16:01 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:52.225 12:16:01 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:52.225 12:16:01 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:52.225 12:16:01 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02a694d1-0c30-4741-8b3e-64bbf390c556 00:13:52.225 12:16:01 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=02a694d1-0c30-4741-8b3e-64bbf390c556 00:13:52.225 12:16:01 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:52.225 12:16:01 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:52.225 12:16:01 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:13:52.225 12:16:01 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:52.225 12:16:01 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:52.225 12:16:01 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:13:52.226 12:16:01 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:52.226 12:16:01 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:52.226 12:16:01 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.226 12:16:01 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.226 12:16:01 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.226 12:16:01 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:13:52.226 12:16:01 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.226 12:16:01 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:13:52.226 12:16:01 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:52.226 12:16:01 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:52.226 12:16:01 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:52.226 12:16:01 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:52.226 12:16:01 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:52.226 12:16:01 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:52.226 12:16:01 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:52.226 12:16:01 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:52.226 12:16:01 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:13:52.226 12:16:01 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:13:52.226 12:16:01 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:13:52.226 12:16:01 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 
00:13:52.226 12:16:01 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:13:52.226 INFO: launching applications... 00:13:52.226 12:16:01 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:13:52.226 12:16:01 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:13:52.226 12:16:01 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:13:52.226 12:16:01 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:13:52.226 12:16:01 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:13:52.226 12:16:01 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:13:52.226 12:16:01 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:13:52.226 12:16:01 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:13:52.226 12:16:01 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:13:52.226 12:16:01 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:13:52.226 12:16:01 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:13:52.226 12:16:01 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:13:52.226 12:16:01 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:13:52.226 12:16:01 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:13:52.226 12:16:01 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=62866 00:13:52.226 12:16:01 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:13:52.226 12:16:01 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:13:52.226 Waiting for target to run... 00:13:52.226 12:16:01 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 62866 /var/tmp/spdk_tgt.sock 00:13:52.226 12:16:01 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 62866 ']' 00:13:52.226 12:16:01 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:13:52.226 12:16:01 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:52.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:13:52.226 12:16:01 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:13:52.226 12:16:01 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:52.226 12:16:01 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:13:52.226 [2024-07-10 12:16:01.619124] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
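The launch traced above is json_config/common.sh starting spdk_tgt with the extra_key.json config and then waiting for its RPC socket before the test proceeds. A minimal sketch of that start-and-wait pattern follows; the repo-relative paths and target flags are the ones visible in the trace, while the retry count and the rpc_get_methods probe are illustrative assumptions, not read from this log:

    # sketch: start spdk_tgt with a JSON config and wait for its RPC socket
    sock=/var/tmp/spdk_tgt.sock
    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r "$sock" \
        --json ./test/json_config/extra_key.json &
    tgt_pid=$!

    # probe the socket until the target answers (retry count is an assumption)
    for ((i = 0; i < 30; i++)); do
        ./scripts/rpc.py -s "$sock" rpc_get_methods &> /dev/null && break
        sleep 0.5
    done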
00:13:52.226 [2024-07-10 12:16:01.619303] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62866 ] 00:13:52.793 [2024-07-10 12:16:02.211166] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.053 [2024-07-10 12:16:02.483134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.990 00:13:53.990 INFO: shutting down applications... 00:13:53.990 12:16:03 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:53.990 12:16:03 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:13:53.990 12:16:03 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:13:53.990 12:16:03 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:13:53.990 12:16:03 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:13:53.990 12:16:03 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:13:53.990 12:16:03 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:13:53.990 12:16:03 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 62866 ]] 00:13:53.990 12:16:03 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 62866 00:13:53.990 12:16:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:13:53.990 12:16:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:13:53.990 12:16:03 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62866 00:13:53.990 12:16:03 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:13:54.557 12:16:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:13:54.557 12:16:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:13:54.557 12:16:03 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62866 00:13:54.557 12:16:03 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:13:55.124 12:16:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:13:55.124 12:16:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:13:55.124 12:16:04 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62866 00:13:55.124 12:16:04 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:13:55.692 12:16:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:13:55.692 12:16:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:13:55.692 12:16:04 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62866 00:13:55.692 12:16:04 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:13:55.950 12:16:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:13:55.950 12:16:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:13:55.951 12:16:05 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62866 00:13:55.951 12:16:05 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:13:56.518 12:16:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:13:56.518 12:16:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:13:56.518 12:16:05 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62866 
00:13:56.518 12:16:05 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:13:57.085 12:16:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:13:57.085 12:16:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:13:57.085 12:16:06 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62866 00:13:57.085 12:16:06 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:13:57.653 12:16:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:13:57.653 12:16:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:13:57.653 12:16:06 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62866 00:13:57.653 12:16:06 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:13:58.220 12:16:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:13:58.220 12:16:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:13:58.220 12:16:07 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62866 00:13:58.221 12:16:07 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:13:58.221 12:16:07 json_config_extra_key -- json_config/common.sh@43 -- # break 00:13:58.221 12:16:07 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:13:58.221 12:16:07 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:13:58.221 SPDK target shutdown done 00:13:58.221 Success 00:13:58.221 12:16:07 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:13:58.221 ************************************ 00:13:58.221 END TEST json_config_extra_key 00:13:58.221 ************************************ 00:13:58.221 00:13:58.221 real 0m5.996s 00:13:58.221 user 0m5.262s 00:13:58.221 sys 0m0.824s 00:13:58.221 12:16:07 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:58.221 12:16:07 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:13:58.221 12:16:07 -- common/autotest_common.sh@1142 -- # return 0 00:13:58.221 12:16:07 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:13:58.221 12:16:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:58.221 12:16:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:58.221 12:16:07 -- common/autotest_common.sh@10 -- # set +x 00:13:58.221 ************************************ 00:13:58.221 START TEST alias_rpc 00:13:58.221 ************************************ 00:13:58.221 12:16:07 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:13:58.221 * Looking for test storage... 00:13:58.221 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:13:58.221 12:16:07 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:13:58.221 12:16:07 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=62990 00:13:58.221 12:16:07 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 62990 00:13:58.221 12:16:07 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 62990 ']' 00:13:58.221 12:16:07 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:58.221 12:16:07 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:58.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
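The repeated 'kill -0 62866' / 'sleep 0.5' iterations traced above come from the shutdown helper in json_config/common.sh: it sends SIGINT to the target, then polls the pid until the process exits or 30 rounds elapse. Roughly, as a sketch of that loop:

    # sketch of the shutdown loop traced above: SIGINT, then poll the pid
    kill -SIGINT "$tgt_pid"
    for ((i = 0; i < 30; i++)); do
        if ! kill -0 "$tgt_pid" 2> /dev/null; then
            echo 'SPDK target shutdown done'
            break
        fi
        sleep 0.5
    done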
00:13:58.221 12:16:07 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:58.221 12:16:07 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:58.221 12:16:07 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.221 12:16:07 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:58.482 [2024-07-10 12:16:07.736563] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:13:58.482 [2024-07-10 12:16:07.736754] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62990 ] 00:13:58.482 [2024-07-10 12:16:07.902523] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:59.048 [2024-07-10 12:16:08.228361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.984 12:16:09 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:59.984 12:16:09 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:13:59.984 12:16:09 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:14:00.243 12:16:09 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 62990 00:14:00.243 12:16:09 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 62990 ']' 00:14:00.243 12:16:09 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 62990 00:14:00.243 12:16:09 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:14:00.243 12:16:09 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:00.243 12:16:09 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62990 00:14:00.243 killing process with pid 62990 00:14:00.243 12:16:09 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:00.243 12:16:09 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:00.243 12:16:09 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62990' 00:14:00.243 12:16:09 alias_rpc -- common/autotest_common.sh@967 -- # kill 62990 00:14:00.243 12:16:09 alias_rpc -- common/autotest_common.sh@972 -- # wait 62990 00:14:03.527 ************************************ 00:14:03.527 END TEST alias_rpc 00:14:03.527 ************************************ 00:14:03.527 00:14:03.527 real 0m5.280s 00:14:03.527 user 0m5.103s 00:14:03.527 sys 0m0.791s 00:14:03.527 12:16:12 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:03.527 12:16:12 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:03.527 12:16:12 -- common/autotest_common.sh@1142 -- # return 0 00:14:03.527 12:16:12 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:14:03.527 12:16:12 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:14:03.527 12:16:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:03.527 12:16:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:03.527 12:16:12 -- common/autotest_common.sh@10 -- # set +x 00:14:03.527 ************************************ 00:14:03.527 START TEST spdkcli_tcp 00:14:03.527 ************************************ 00:14:03.527 12:16:12 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:14:03.527 * Looking for test 
storage... 00:14:03.527 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:14:03.527 12:16:12 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:14:03.527 12:16:12 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:14:03.527 12:16:12 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:14:03.527 12:16:12 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:14:03.527 12:16:12 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:14:03.527 12:16:12 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:03.527 12:16:12 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:14:03.527 12:16:12 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:03.527 12:16:12 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:03.527 12:16:12 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=63102 00:14:03.527 12:16:12 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:14:03.527 12:16:12 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 63102 00:14:03.527 12:16:12 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 63102 ']' 00:14:03.527 12:16:12 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:03.527 12:16:12 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:03.527 12:16:12 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:03.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:03.527 12:16:12 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:03.527 12:16:12 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:03.784 [2024-07-10 12:16:13.088136] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
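The killprocess calls traced in the teardowns above boil down to a safety-checked kill: verify the pid is still alive, look up its command name with ps, refuse to kill a bare sudo, then kill and wait. A simplified sketch is below; the real helper in autotest_common.sh has extra handling for sudo-launched processes that is omitted here:

    # simplified sketch of the killprocess helper seen in the trace
    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                   # still running?
        if [ "$(uname)" = Linux ]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")
            [ "$name" = sudo ] && return 1           # don't kill a bare sudo
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true
    }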
00:14:03.784 [2024-07-10 12:16:13.088578] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63102 ] 00:14:04.042 [2024-07-10 12:16:13.269024] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:04.299 [2024-07-10 12:16:13.586424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:04.299 [2024-07-10 12:16:13.586460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:05.671 12:16:14 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:05.671 12:16:14 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:14:05.671 12:16:14 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:14:05.671 12:16:14 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=63125 00:14:05.671 12:16:14 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:14:05.671 [ 00:14:05.671 "bdev_malloc_delete", 00:14:05.671 "bdev_malloc_create", 00:14:05.671 "bdev_null_resize", 00:14:05.671 "bdev_null_delete", 00:14:05.671 "bdev_null_create", 00:14:05.671 "bdev_nvme_cuse_unregister", 00:14:05.671 "bdev_nvme_cuse_register", 00:14:05.671 "bdev_opal_new_user", 00:14:05.671 "bdev_opal_set_lock_state", 00:14:05.671 "bdev_opal_delete", 00:14:05.671 "bdev_opal_get_info", 00:14:05.671 "bdev_opal_create", 00:14:05.671 "bdev_nvme_opal_revert", 00:14:05.671 "bdev_nvme_opal_init", 00:14:05.671 "bdev_nvme_send_cmd", 00:14:05.671 "bdev_nvme_get_path_iostat", 00:14:05.671 "bdev_nvme_get_mdns_discovery_info", 00:14:05.671 "bdev_nvme_stop_mdns_discovery", 00:14:05.671 "bdev_nvme_start_mdns_discovery", 00:14:05.671 "bdev_nvme_set_multipath_policy", 00:14:05.671 "bdev_nvme_set_preferred_path", 00:14:05.671 "bdev_nvme_get_io_paths", 00:14:05.671 "bdev_nvme_remove_error_injection", 00:14:05.671 "bdev_nvme_add_error_injection", 00:14:05.671 "bdev_nvme_get_discovery_info", 00:14:05.671 "bdev_nvme_stop_discovery", 00:14:05.671 "bdev_nvme_start_discovery", 00:14:05.671 "bdev_nvme_get_controller_health_info", 00:14:05.671 "bdev_nvme_disable_controller", 00:14:05.671 "bdev_nvme_enable_controller", 00:14:05.671 "bdev_nvme_reset_controller", 00:14:05.671 "bdev_nvme_get_transport_statistics", 00:14:05.671 "bdev_nvme_apply_firmware", 00:14:05.671 "bdev_nvme_detach_controller", 00:14:05.671 "bdev_nvme_get_controllers", 00:14:05.671 "bdev_nvme_attach_controller", 00:14:05.671 "bdev_nvme_set_hotplug", 00:14:05.671 "bdev_nvme_set_options", 00:14:05.671 "bdev_passthru_delete", 00:14:05.671 "bdev_passthru_create", 00:14:05.671 "bdev_lvol_set_parent_bdev", 00:14:05.671 "bdev_lvol_set_parent", 00:14:05.671 "bdev_lvol_check_shallow_copy", 00:14:05.671 "bdev_lvol_start_shallow_copy", 00:14:05.671 "bdev_lvol_grow_lvstore", 00:14:05.672 "bdev_lvol_get_lvols", 00:14:05.672 "bdev_lvol_get_lvstores", 00:14:05.672 "bdev_lvol_delete", 00:14:05.672 "bdev_lvol_set_read_only", 00:14:05.672 "bdev_lvol_resize", 00:14:05.672 "bdev_lvol_decouple_parent", 00:14:05.672 "bdev_lvol_inflate", 00:14:05.672 "bdev_lvol_rename", 00:14:05.672 "bdev_lvol_clone_bdev", 00:14:05.672 "bdev_lvol_clone", 00:14:05.672 "bdev_lvol_snapshot", 00:14:05.672 "bdev_lvol_create", 00:14:05.672 "bdev_lvol_delete_lvstore", 00:14:05.672 "bdev_lvol_rename_lvstore", 00:14:05.672 "bdev_lvol_create_lvstore", 
00:14:05.672 "bdev_raid_set_options", 00:14:05.672 "bdev_raid_remove_base_bdev", 00:14:05.672 "bdev_raid_add_base_bdev", 00:14:05.672 "bdev_raid_delete", 00:14:05.672 "bdev_raid_create", 00:14:05.672 "bdev_raid_get_bdevs", 00:14:05.672 "bdev_error_inject_error", 00:14:05.672 "bdev_error_delete", 00:14:05.672 "bdev_error_create", 00:14:05.672 "bdev_split_delete", 00:14:05.672 "bdev_split_create", 00:14:05.672 "bdev_delay_delete", 00:14:05.672 "bdev_delay_create", 00:14:05.672 "bdev_delay_update_latency", 00:14:05.672 "bdev_zone_block_delete", 00:14:05.672 "bdev_zone_block_create", 00:14:05.672 "blobfs_create", 00:14:05.672 "blobfs_detect", 00:14:05.672 "blobfs_set_cache_size", 00:14:05.672 "bdev_xnvme_delete", 00:14:05.672 "bdev_xnvme_create", 00:14:05.672 "bdev_aio_delete", 00:14:05.672 "bdev_aio_rescan", 00:14:05.672 "bdev_aio_create", 00:14:05.672 "bdev_ftl_set_property", 00:14:05.672 "bdev_ftl_get_properties", 00:14:05.672 "bdev_ftl_get_stats", 00:14:05.672 "bdev_ftl_unmap", 00:14:05.672 "bdev_ftl_unload", 00:14:05.672 "bdev_ftl_delete", 00:14:05.672 "bdev_ftl_load", 00:14:05.672 "bdev_ftl_create", 00:14:05.672 "bdev_virtio_attach_controller", 00:14:05.672 "bdev_virtio_scsi_get_devices", 00:14:05.672 "bdev_virtio_detach_controller", 00:14:05.672 "bdev_virtio_blk_set_hotplug", 00:14:05.672 "bdev_iscsi_delete", 00:14:05.672 "bdev_iscsi_create", 00:14:05.672 "bdev_iscsi_set_options", 00:14:05.672 "accel_error_inject_error", 00:14:05.672 "ioat_scan_accel_module", 00:14:05.672 "dsa_scan_accel_module", 00:14:05.672 "iaa_scan_accel_module", 00:14:05.672 "keyring_file_remove_key", 00:14:05.672 "keyring_file_add_key", 00:14:05.672 "keyring_linux_set_options", 00:14:05.672 "iscsi_get_histogram", 00:14:05.672 "iscsi_enable_histogram", 00:14:05.672 "iscsi_set_options", 00:14:05.672 "iscsi_get_auth_groups", 00:14:05.672 "iscsi_auth_group_remove_secret", 00:14:05.672 "iscsi_auth_group_add_secret", 00:14:05.672 "iscsi_delete_auth_group", 00:14:05.672 "iscsi_create_auth_group", 00:14:05.672 "iscsi_set_discovery_auth", 00:14:05.672 "iscsi_get_options", 00:14:05.672 "iscsi_target_node_request_logout", 00:14:05.672 "iscsi_target_node_set_redirect", 00:14:05.672 "iscsi_target_node_set_auth", 00:14:05.672 "iscsi_target_node_add_lun", 00:14:05.672 "iscsi_get_stats", 00:14:05.672 "iscsi_get_connections", 00:14:05.672 "iscsi_portal_group_set_auth", 00:14:05.672 "iscsi_start_portal_group", 00:14:05.672 "iscsi_delete_portal_group", 00:14:05.672 "iscsi_create_portal_group", 00:14:05.672 "iscsi_get_portal_groups", 00:14:05.672 "iscsi_delete_target_node", 00:14:05.672 "iscsi_target_node_remove_pg_ig_maps", 00:14:05.672 "iscsi_target_node_add_pg_ig_maps", 00:14:05.672 "iscsi_create_target_node", 00:14:05.672 "iscsi_get_target_nodes", 00:14:05.672 "iscsi_delete_initiator_group", 00:14:05.672 "iscsi_initiator_group_remove_initiators", 00:14:05.672 "iscsi_initiator_group_add_initiators", 00:14:05.672 "iscsi_create_initiator_group", 00:14:05.672 "iscsi_get_initiator_groups", 00:14:05.672 "nvmf_set_crdt", 00:14:05.672 "nvmf_set_config", 00:14:05.672 "nvmf_set_max_subsystems", 00:14:05.672 "nvmf_stop_mdns_prr", 00:14:05.672 "nvmf_publish_mdns_prr", 00:14:05.672 "nvmf_subsystem_get_listeners", 00:14:05.672 "nvmf_subsystem_get_qpairs", 00:14:05.672 "nvmf_subsystem_get_controllers", 00:14:05.672 "nvmf_get_stats", 00:14:05.672 "nvmf_get_transports", 00:14:05.672 "nvmf_create_transport", 00:14:05.672 "nvmf_get_targets", 00:14:05.672 "nvmf_delete_target", 00:14:05.672 "nvmf_create_target", 00:14:05.672 
"nvmf_subsystem_allow_any_host", 00:14:05.672 "nvmf_subsystem_remove_host", 00:14:05.672 "nvmf_subsystem_add_host", 00:14:05.672 "nvmf_ns_remove_host", 00:14:05.672 "nvmf_ns_add_host", 00:14:05.672 "nvmf_subsystem_remove_ns", 00:14:05.672 "nvmf_subsystem_add_ns", 00:14:05.672 "nvmf_subsystem_listener_set_ana_state", 00:14:05.672 "nvmf_discovery_get_referrals", 00:14:05.672 "nvmf_discovery_remove_referral", 00:14:05.672 "nvmf_discovery_add_referral", 00:14:05.672 "nvmf_subsystem_remove_listener", 00:14:05.672 "nvmf_subsystem_add_listener", 00:14:05.672 "nvmf_delete_subsystem", 00:14:05.672 "nvmf_create_subsystem", 00:14:05.672 "nvmf_get_subsystems", 00:14:05.672 "env_dpdk_get_mem_stats", 00:14:05.672 "nbd_get_disks", 00:14:05.672 "nbd_stop_disk", 00:14:05.672 "nbd_start_disk", 00:14:05.672 "ublk_recover_disk", 00:14:05.672 "ublk_get_disks", 00:14:05.672 "ublk_stop_disk", 00:14:05.672 "ublk_start_disk", 00:14:05.672 "ublk_destroy_target", 00:14:05.672 "ublk_create_target", 00:14:05.672 "virtio_blk_create_transport", 00:14:05.672 "virtio_blk_get_transports", 00:14:05.672 "vhost_controller_set_coalescing", 00:14:05.672 "vhost_get_controllers", 00:14:05.672 "vhost_delete_controller", 00:14:05.672 "vhost_create_blk_controller", 00:14:05.672 "vhost_scsi_controller_remove_target", 00:14:05.672 "vhost_scsi_controller_add_target", 00:14:05.672 "vhost_start_scsi_controller", 00:14:05.672 "vhost_create_scsi_controller", 00:14:05.672 "thread_set_cpumask", 00:14:05.672 "framework_get_governor", 00:14:05.672 "framework_get_scheduler", 00:14:05.672 "framework_set_scheduler", 00:14:05.672 "framework_get_reactors", 00:14:05.672 "thread_get_io_channels", 00:14:05.672 "thread_get_pollers", 00:14:05.672 "thread_get_stats", 00:14:05.672 "framework_monitor_context_switch", 00:14:05.672 "spdk_kill_instance", 00:14:05.672 "log_enable_timestamps", 00:14:05.672 "log_get_flags", 00:14:05.672 "log_clear_flag", 00:14:05.672 "log_set_flag", 00:14:05.672 "log_get_level", 00:14:05.672 "log_set_level", 00:14:05.672 "log_get_print_level", 00:14:05.672 "log_set_print_level", 00:14:05.672 "framework_enable_cpumask_locks", 00:14:05.672 "framework_disable_cpumask_locks", 00:14:05.672 "framework_wait_init", 00:14:05.672 "framework_start_init", 00:14:05.672 "scsi_get_devices", 00:14:05.672 "bdev_get_histogram", 00:14:05.672 "bdev_enable_histogram", 00:14:05.672 "bdev_set_qos_limit", 00:14:05.672 "bdev_set_qd_sampling_period", 00:14:05.672 "bdev_get_bdevs", 00:14:05.672 "bdev_reset_iostat", 00:14:05.672 "bdev_get_iostat", 00:14:05.672 "bdev_examine", 00:14:05.672 "bdev_wait_for_examine", 00:14:05.672 "bdev_set_options", 00:14:05.672 "notify_get_notifications", 00:14:05.672 "notify_get_types", 00:14:05.672 "accel_get_stats", 00:14:05.672 "accel_set_options", 00:14:05.672 "accel_set_driver", 00:14:05.672 "accel_crypto_key_destroy", 00:14:05.672 "accel_crypto_keys_get", 00:14:05.672 "accel_crypto_key_create", 00:14:05.672 "accel_assign_opc", 00:14:05.672 "accel_get_module_info", 00:14:05.672 "accel_get_opc_assignments", 00:14:05.672 "vmd_rescan", 00:14:05.672 "vmd_remove_device", 00:14:05.672 "vmd_enable", 00:14:05.672 "sock_get_default_impl", 00:14:05.672 "sock_set_default_impl", 00:14:05.672 "sock_impl_set_options", 00:14:05.672 "sock_impl_get_options", 00:14:05.672 "iobuf_get_stats", 00:14:05.672 "iobuf_set_options", 00:14:05.672 "framework_get_pci_devices", 00:14:05.672 "framework_get_config", 00:14:05.672 "framework_get_subsystems", 00:14:05.672 "trace_get_info", 00:14:05.672 "trace_get_tpoint_group_mask", 00:14:05.672 
"trace_disable_tpoint_group", 00:14:05.672 "trace_enable_tpoint_group", 00:14:05.672 "trace_clear_tpoint_mask", 00:14:05.672 "trace_set_tpoint_mask", 00:14:05.672 "keyring_get_keys", 00:14:05.672 "spdk_get_version", 00:14:05.672 "rpc_get_methods" 00:14:05.672 ] 00:14:05.672 12:16:14 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:14:05.672 12:16:14 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:05.672 12:16:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:05.672 12:16:15 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:14:05.672 12:16:15 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 63102 00:14:05.672 12:16:15 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 63102 ']' 00:14:05.672 12:16:15 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 63102 00:14:05.672 12:16:15 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:14:05.672 12:16:15 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:05.672 12:16:15 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63102 00:14:05.672 killing process with pid 63102 00:14:05.672 12:16:15 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:05.672 12:16:15 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:05.672 12:16:15 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63102' 00:14:05.672 12:16:15 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 63102 00:14:05.672 12:16:15 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 63102 00:14:08.975 ************************************ 00:14:08.975 END TEST spdkcli_tcp 00:14:08.975 ************************************ 00:14:08.975 00:14:08.975 real 0m5.138s 00:14:08.975 user 0m8.771s 00:14:08.975 sys 0m0.824s 00:14:08.975 12:16:17 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:08.975 12:16:17 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:08.975 12:16:18 -- common/autotest_common.sh@1142 -- # return 0 00:14:08.975 12:16:18 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:14:08.975 12:16:18 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:08.975 12:16:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:08.975 12:16:18 -- common/autotest_common.sh@10 -- # set +x 00:14:08.975 ************************************ 00:14:08.975 START TEST dpdk_mem_utility 00:14:08.975 ************************************ 00:14:08.975 12:16:18 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:14:08.975 * Looking for test storage... 
00:14:08.975 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:14:08.975 12:16:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:14:08.975 12:16:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=63227 00:14:08.975 12:16:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:08.975 12:16:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 63227 00:14:08.975 12:16:18 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 63227 ']' 00:14:08.975 12:16:18 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:08.975 12:16:18 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:08.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:08.975 12:16:18 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:08.975 12:16:18 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:08.975 12:16:18 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:14:08.975 [2024-07-10 12:16:18.277829] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:14:08.975 [2024-07-10 12:16:18.277985] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63227 ] 00:14:09.234 [2024-07-10 12:16:18.454342] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:09.492 [2024-07-10 12:16:18.766462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:10.502 12:16:19 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:10.502 12:16:19 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:14:10.502 12:16:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:14:10.502 12:16:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:14:10.502 12:16:19 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.502 12:16:19 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:14:10.502 { 00:14:10.502 "filename": "/tmp/spdk_mem_dump.txt" 00:14:10.502 } 00:14:10.502 12:16:19 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.502 12:16:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:14:10.762 DPDK memory size 820.000000 MiB in 1 heap(s) 00:14:10.762 1 heaps totaling size 820.000000 MiB 00:14:10.762 size: 820.000000 MiB heap id: 0 00:14:10.762 end heaps---------- 00:14:10.762 8 mempools totaling size 598.116089 MiB 00:14:10.762 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:14:10.762 size: 158.602051 MiB name: PDU_data_out_Pool 00:14:10.762 size: 84.521057 MiB name: bdev_io_63227 00:14:10.762 size: 51.011292 MiB name: evtpool_63227 00:14:10.762 size: 50.003479 MiB name: msgpool_63227 00:14:10.762 size: 21.763794 MiB name: PDU_Pool 00:14:10.762 size: 19.513306 MiB name: SCSI_TASK_Pool 
00:14:10.762 size: 0.026123 MiB name: Session_Pool 00:14:10.762 end mempools------- 00:14:10.762 6 memzones totaling size 4.142822 MiB 00:14:10.762 size: 1.000366 MiB name: RG_ring_0_63227 00:14:10.762 size: 1.000366 MiB name: RG_ring_1_63227 00:14:10.762 size: 1.000366 MiB name: RG_ring_4_63227 00:14:10.762 size: 1.000366 MiB name: RG_ring_5_63227 00:14:10.762 size: 0.125366 MiB name: RG_ring_2_63227 00:14:10.762 size: 0.015991 MiB name: RG_ring_3_63227 00:14:10.762 end memzones------- 00:14:10.762 12:16:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:14:10.762 heap id: 0 total size: 820.000000 MiB number of busy elements: 297 number of free elements: 18 00:14:10.762 list of free elements. size: 18.452271 MiB 00:14:10.762 element at address: 0x200000400000 with size: 1.999451 MiB 00:14:10.762 element at address: 0x200000800000 with size: 1.996887 MiB 00:14:10.762 element at address: 0x200007000000 with size: 1.995972 MiB 00:14:10.762 element at address: 0x20000b200000 with size: 1.995972 MiB 00:14:10.762 element at address: 0x200019100040 with size: 0.999939 MiB 00:14:10.762 element at address: 0x200019500040 with size: 0.999939 MiB 00:14:10.762 element at address: 0x200019600000 with size: 0.999084 MiB 00:14:10.762 element at address: 0x200003e00000 with size: 0.996094 MiB 00:14:10.762 element at address: 0x200032200000 with size: 0.994324 MiB 00:14:10.762 element at address: 0x200018e00000 with size: 0.959656 MiB 00:14:10.762 element at address: 0x200019900040 with size: 0.936401 MiB 00:14:10.762 element at address: 0x200000200000 with size: 0.830200 MiB 00:14:10.762 element at address: 0x20001b000000 with size: 0.564880 MiB 00:14:10.762 element at address: 0x200019200000 with size: 0.487976 MiB 00:14:10.762 element at address: 0x200019a00000 with size: 0.485413 MiB 00:14:10.762 element at address: 0x200013800000 with size: 0.467651 MiB 00:14:10.762 element at address: 0x200028400000 with size: 0.390442 MiB 00:14:10.762 element at address: 0x200003a00000 with size: 0.351990 MiB 00:14:10.762 list of standard malloc elements. 
size: 199.283325 MiB 00:14:10.763 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:14:10.763 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:14:10.763 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:14:10.763 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:14:10.763 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:14:10.763 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:14:10.763 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:14:10.763 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:14:10.763 element at address: 0x20000b1ff040 with size: 0.000427 MiB 00:14:10.763 element at address: 0x2000199efdc0 with size: 0.000366 MiB 00:14:10.763 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:14:10.763 element at address: 0x2000002d4880 with size: 0.000244 MiB 00:14:10.763 element at address: 0x2000002d4980 with size: 0.000244 MiB 00:14:10.763 element at address: 0x2000002d4a80 with size: 0.000244 MiB 00:14:10.763 element at address: 0x2000002d4b80 with size: 0.000244 MiB 00:14:10.763 element at address: 0x2000002d4c80 with size: 0.000244 MiB 00:14:10.763 element at address: 0x2000002d4d80 with size: 0.000244 MiB 00:14:10.763 element at address: 0x2000002d4e80 with size: 0.000244 MiB 00:14:10.763 element at address: 0x2000002d4f80 with size: 0.000244 MiB 00:14:10.763 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:14:10.763 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:14:10.763 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:14:10.763 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:14:10.763 element at address: 0x2000002d5480 with size: 0.000244 MiB 00:14:10.763 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:14:10.763 element at address: 0x2000002d5680 with size: 0.000244 MiB 00:14:10.763 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:14:10.763 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:14:10.763 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:14:10.763 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:14:10.763 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:14:10.763 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:14:10.763 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:14:10.763 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:14:10.763 element at address: 0x2000002d6100 with size: 0.000244 MiB 00:14:10.763 element at address: 0x2000002d6200 with size: 0.000244 MiB 00:14:10.763 element at address: 0x2000002d6300 with size: 0.000244 MiB 00:14:10.763 element at address: 0x2000002d6400 with size: 0.000244 MiB 00:14:10.763 element at address: 0x2000002d6500 with size: 0.000244 MiB 00:14:10.763 element at address: 0x2000002d6600 with size: 0.000244 MiB 00:14:10.763 element at address: 0x2000002d6700 with size: 0.000244 MiB 00:14:10.763 element at address: 0x2000002d6800 with size: 0.000244 MiB 00:14:10.763 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:14:10.763 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:14:10.763 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:14:10.763 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:14:10.763 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:14:10.763 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:14:10.763 element at address: 0x2000002d6f00 with size: 0.000244 MiB 
00:14:10.763 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:14:10.763 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:14:10.763 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:14:10.763 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:14:10.763 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:14:10.763 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:14:10.763 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:14:10.763 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:14:10.763 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:14:10.763 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:14:10.763 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:14:10.763 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:14:10.763 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:14:10.763 element at address: 0x200003a5a1c0 with size: 0.000244 MiB 00:14:10.763 element at address: 0x200003a5a2c0 with size: 0.000244 MiB 00:14:10.763 element at address: 0x200003a5a3c0 with size: 0.000244 MiB 00:14:10.763 element at address: 0x200003a5a4c0 with size: 0.000244 MiB 00:14:10.763 element at address: 0x200003a5a5c0 with size: 0.000244 MiB 00:14:10.763 element at address: 0x200003a5a6c0 with size: 0.000244 MiB 00:14:10.763 element at address: 0x200003a5a7c0 with size: 0.000244 MiB 00:14:10.763 element at address: 0x200003a5a8c0 with size: 0.000244 MiB 00:14:10.763 element at address: 0x200003a5a9c0 with size: 0.000244 MiB 00:14:10.763 element at address: 0x200003a5aac0 with size: 0.000244 MiB 00:14:10.763 element at address: 0x200003a5abc0 with size: 0.000244 MiB 00:14:10.763 element at address: 0x200003a5acc0 with size: 0.000244 MiB 00:14:10.763 element at address: 0x200003a5adc0 with size: 0.000244 MiB 00:14:10.763 element at address: 0x200003a5aec0 with size: 0.000244 MiB 00:14:10.763 element at address: 0x200003a5afc0 with size: 0.000244 MiB 00:14:10.763 element at address: 0x200003a5b0c0 with size: 0.000244 MiB 00:14:10.763 element at address: 0x200003a5b1c0 with size: 0.000244 MiB 00:14:10.763 element at address: 0x200003aff980 with size: 0.000244 MiB 00:14:10.763 element at address: 0x200003affa80 with size: 0.000244 MiB 00:14:10.763 element at address: 0x200003eff000 with size: 0.000244 MiB 00:14:10.763 element at address: 0x20000b1ff200 with size: 0.000244 MiB 00:14:10.763 element at address: 0x20000b1ff300 with size: 0.000244 MiB 00:14:10.763 element at address: 0x20000b1ff400 with size: 0.000244 MiB 00:14:10.763 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:14:10.763 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:14:10.763 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:14:10.763 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:14:10.763 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:14:10.763 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:14:10.763 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:14:10.763 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:14:10.763 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:14:10.763 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:14:10.763 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:14:10.763 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:14:10.763 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:14:10.763 element at 
address: 0x2000137ff380 with size: 0.000244 MiB 00:14:10.763 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:14:10.763 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:14:10.763 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:14:10.763 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:14:10.763 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:14:10.763 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:14:10.763 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:14:10.763 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:14:10.763 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:14:10.763 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:14:10.763 element at address: 0x200013877b80 with size: 0.000244 MiB 00:14:10.763 element at address: 0x200013877c80 with size: 0.000244 MiB 00:14:10.763 element at address: 0x200013877d80 with size: 0.000244 MiB 00:14:10.763 element at address: 0x200013877e80 with size: 0.000244 MiB 00:14:10.763 element at address: 0x200013877f80 with size: 0.000244 MiB 00:14:10.763 element at address: 0x200013878080 with size: 0.000244 MiB 00:14:10.763 element at address: 0x200013878180 with size: 0.000244 MiB 00:14:10.763 element at address: 0x200013878280 with size: 0.000244 MiB 00:14:10.763 element at address: 0x200013878380 with size: 0.000244 MiB 00:14:10.763 element at address: 0x200013878480 with size: 0.000244 MiB 00:14:10.763 element at address: 0x200013878580 with size: 0.000244 MiB 00:14:10.764 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001927cec0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001927cfc0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001927d0c0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001927d1c0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001927d2c0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:14:10.764 element at address: 0x2000196ffc40 with size: 0.000244 MiB 00:14:10.764 element at address: 0x2000199efbc0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x2000199efcc0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x200019abc680 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001b090ec0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001b0911c0 
with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001b0942c0 with size: 0.000244 MiB 
00:14:10.764 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:14:10.764 element at address: 0x200028463f40 with size: 0.000244 MiB 00:14:10.764 element at address: 0x200028464040 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20002846ad00 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20002846af80 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20002846b080 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20002846b180 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20002846b280 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20002846b380 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20002846b480 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20002846b580 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20002846b680 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20002846b780 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20002846b880 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20002846b980 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20002846ba80 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20002846bb80 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20002846bc80 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20002846bd80 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20002846be80 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20002846bf80 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20002846c080 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20002846c180 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20002846c280 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20002846c380 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20002846c480 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20002846c580 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20002846c680 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20002846c780 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20002846c880 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20002846c980 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20002846ca80 with size: 0.000244 MiB 00:14:10.764 element at address: 0x20002846cb80 with size: 0.000244 MiB 00:14:10.765 element at 
address: 0x20002846cc80 with size: 0.000244 MiB 00:14:10.765 element at address: 0x20002846cd80 with size: 0.000244 MiB 00:14:10.765 element at address: 0x20002846ce80 with size: 0.000244 MiB 00:14:10.765 element at address: 0x20002846cf80 with size: 0.000244 MiB 00:14:10.765 element at address: 0x20002846d080 with size: 0.000244 MiB 00:14:10.765 element at address: 0x20002846d180 with size: 0.000244 MiB 00:14:10.765 element at address: 0x20002846d280 with size: 0.000244 MiB 00:14:10.765 element at address: 0x20002846d380 with size: 0.000244 MiB 00:14:10.765 element at address: 0x20002846d480 with size: 0.000244 MiB 00:14:10.765 element at address: 0x20002846d580 with size: 0.000244 MiB 00:14:10.765 element at address: 0x20002846d680 with size: 0.000244 MiB 00:14:10.765 element at address: 0x20002846d780 with size: 0.000244 MiB 00:14:10.765 element at address: 0x20002846d880 with size: 0.000244 MiB 00:14:10.765 element at address: 0x20002846d980 with size: 0.000244 MiB 00:14:10.765 element at address: 0x20002846da80 with size: 0.000244 MiB 00:14:10.765 element at address: 0x20002846db80 with size: 0.000244 MiB 00:14:10.765 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:14:10.765 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:14:10.765 element at address: 0x20002846de80 with size: 0.000244 MiB 00:14:10.765 element at address: 0x20002846df80 with size: 0.000244 MiB 00:14:10.765 element at address: 0x20002846e080 with size: 0.000244 MiB 00:14:10.765 element at address: 0x20002846e180 with size: 0.000244 MiB 00:14:10.765 element at address: 0x20002846e280 with size: 0.000244 MiB 00:14:10.765 element at address: 0x20002846e380 with size: 0.000244 MiB 00:14:10.765 element at address: 0x20002846e480 with size: 0.000244 MiB 00:14:10.765 element at address: 0x20002846e580 with size: 0.000244 MiB 00:14:10.765 element at address: 0x20002846e680 with size: 0.000244 MiB 00:14:10.765 element at address: 0x20002846e780 with size: 0.000244 MiB 00:14:10.765 element at address: 0x20002846e880 with size: 0.000244 MiB 00:14:10.765 element at address: 0x20002846e980 with size: 0.000244 MiB 00:14:10.765 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:14:10.765 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:14:10.765 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:14:10.765 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:14:10.765 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:14:10.765 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:14:10.765 element at address: 0x20002846f080 with size: 0.000244 MiB 00:14:10.765 element at address: 0x20002846f180 with size: 0.000244 MiB 00:14:10.765 element at address: 0x20002846f280 with size: 0.000244 MiB 00:14:10.765 element at address: 0x20002846f380 with size: 0.000244 MiB 00:14:10.765 element at address: 0x20002846f480 with size: 0.000244 MiB 00:14:10.765 element at address: 0x20002846f580 with size: 0.000244 MiB 00:14:10.765 element at address: 0x20002846f680 with size: 0.000244 MiB 00:14:10.765 element at address: 0x20002846f780 with size: 0.000244 MiB 00:14:10.765 element at address: 0x20002846f880 with size: 0.000244 MiB 00:14:10.765 element at address: 0x20002846f980 with size: 0.000244 MiB 00:14:10.765 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:14:10.765 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:14:10.765 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:14:10.765 element at address: 0x20002846fd80 
with size: 0.000244 MiB 00:14:10.765 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:14:10.765 list of memzone associated elements. size: 602.264404 MiB 00:14:10.765 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:14:10.765 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:14:10.765 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:14:10.765 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:14:10.765 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:14:10.765 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_63227_0 00:14:10.765 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:14:10.765 associated memzone info: size: 48.002930 MiB name: MP_evtpool_63227_0 00:14:10.765 element at address: 0x200003fff340 with size: 48.003113 MiB 00:14:10.765 associated memzone info: size: 48.002930 MiB name: MP_msgpool_63227_0 00:14:10.765 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:14:10.765 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:14:10.765 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:14:10.765 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:14:10.765 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:14:10.765 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_63227 00:14:10.765 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:14:10.765 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_63227 00:14:10.765 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:14:10.765 associated memzone info: size: 1.007996 MiB name: MP_evtpool_63227 00:14:10.765 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:14:10.765 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:14:10.765 element at address: 0x200019abc780 with size: 1.008179 MiB 00:14:10.765 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:14:10.765 element at address: 0x200018efde00 with size: 1.008179 MiB 00:14:10.765 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:14:10.765 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:14:10.765 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:14:10.765 element at address: 0x200003eff100 with size: 1.000549 MiB 00:14:10.765 associated memzone info: size: 1.000366 MiB name: RG_ring_0_63227 00:14:10.765 element at address: 0x200003affb80 with size: 1.000549 MiB 00:14:10.765 associated memzone info: size: 1.000366 MiB name: RG_ring_1_63227 00:14:10.765 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:14:10.765 associated memzone info: size: 1.000366 MiB name: RG_ring_4_63227 00:14:10.765 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:14:10.765 associated memzone info: size: 1.000366 MiB name: RG_ring_5_63227 00:14:10.765 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:14:10.765 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_63227 00:14:10.765 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:14:10.765 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:14:10.765 element at address: 0x200013878680 with size: 0.500549 MiB 00:14:10.765 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:14:10.765 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:14:10.765 associated memzone info: size: 0.250366 MiB name: 
RG_MP_PDU_immediate_data_Pool 00:14:10.765 element at address: 0x200003adf740 with size: 0.125549 MiB 00:14:10.765 associated memzone info: size: 0.125366 MiB name: RG_ring_2_63227 00:14:10.765 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:14:10.765 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:14:10.765 element at address: 0x200028464140 with size: 0.023804 MiB 00:14:10.765 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:14:10.765 element at address: 0x200003adb500 with size: 0.016174 MiB 00:14:10.765 associated memzone info: size: 0.015991 MiB name: RG_ring_3_63227 00:14:10.765 element at address: 0x20002846a2c0 with size: 0.002502 MiB 00:14:10.765 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:14:10.765 element at address: 0x2000002d5f80 with size: 0.000366 MiB 00:14:10.765 associated memzone info: size: 0.000183 MiB name: MP_msgpool_63227 00:14:10.765 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:14:10.765 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_63227 00:14:10.765 element at address: 0x20002846ae00 with size: 0.000366 MiB 00:14:10.765 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:14:10.765 12:16:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:14:10.765 12:16:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 63227 00:14:10.765 12:16:20 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 63227 ']' 00:14:10.765 12:16:20 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 63227 00:14:10.765 12:16:20 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:14:10.765 12:16:20 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:10.766 12:16:20 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63227 00:14:10.766 killing process with pid 63227 00:14:10.766 12:16:20 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:10.766 12:16:20 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:10.766 12:16:20 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63227' 00:14:10.766 12:16:20 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 63227 00:14:10.766 12:16:20 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 63227 00:14:14.048 00:14:14.048 real 0m5.070s 00:14:14.048 user 0m4.812s 00:14:14.048 sys 0m0.739s 00:14:14.048 12:16:23 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:14.048 12:16:23 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:14:14.048 ************************************ 00:14:14.048 END TEST dpdk_mem_utility 00:14:14.048 ************************************ 00:14:14.048 12:16:23 -- common/autotest_common.sh@1142 -- # return 0 00:14:14.048 12:16:23 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:14:14.048 12:16:23 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:14.048 12:16:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:14.048 12:16:23 -- common/autotest_common.sh@10 -- # set +x 00:14:14.048 ************************************ 00:14:14.048 START TEST event 00:14:14.048 ************************************ 00:14:14.048 12:16:23 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 
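The per-element dump from test_dpdk_mem_info.sh above is easier to judge in aggregate than entry by entry. As a rough illustrative sketch (it assumes this console output has been saved to a local file, here called autotest.log, which the test itself does not do), the element lines can be tallied with standard tools:

  # count the per-element entries and sum their sizes from a saved copy of this log
  grep -o 'element at address: 0x[0-9a-f]* with size: [0-9.]* MiB' autotest.log |
    awk '{ sum += $(NF-1); n++ } END { printf "%d elements, %.6f MiB total\n", n, sum }'

The "list of memzone associated elements" header above already reports an aggregate size; the per-element view is mainly useful for comparing pool populations between runs.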
00:14:14.048 * Looking for test storage... 00:14:14.048 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:14:14.048 12:16:23 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:14:14.048 12:16:23 event -- bdev/nbd_common.sh@6 -- # set -e 00:14:14.048 12:16:23 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:14:14.048 12:16:23 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:14:14.048 12:16:23 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:14.048 12:16:23 event -- common/autotest_common.sh@10 -- # set +x 00:14:14.048 ************************************ 00:14:14.048 START TEST event_perf 00:14:14.048 ************************************ 00:14:14.048 12:16:23 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:14:14.048 Running I/O for 1 seconds...[2024-07-10 12:16:23.373783] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:14:14.048 [2024-07-10 12:16:23.373914] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63339 ] 00:14:14.330 [2024-07-10 12:16:23.551245] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:14.588 [2024-07-10 12:16:23.869541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:14.588 [2024-07-10 12:16:23.869692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:14.588 [2024-07-10 12:16:23.869697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:14.588 Running I/O for 1 seconds...[2024-07-10 12:16:23.869706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:15.964 00:14:15.964 lcore 0: 175488 00:14:15.964 lcore 1: 175487 00:14:15.964 lcore 2: 175488 00:14:15.964 lcore 3: 175489 00:14:15.964 done. 00:14:15.964 00:14:15.964 real 0m2.101s 00:14:15.964 user 0m4.790s 00:14:15.964 sys 0m0.180s 00:14:15.964 12:16:25 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:15.964 12:16:25 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:14:15.964 ************************************ 00:14:15.964 END TEST event_perf 00:14:15.964 ************************************ 00:14:16.222 12:16:25 event -- common/autotest_common.sh@1142 -- # return 0 00:14:16.222 12:16:25 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:14:16.222 12:16:25 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:14:16.222 12:16:25 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:16.222 12:16:25 event -- common/autotest_common.sh@10 -- # set +x 00:14:16.222 ************************************ 00:14:16.222 START TEST event_reactor 00:14:16.222 ************************************ 00:14:16.222 12:16:25 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:14:16.222 [2024-07-10 12:16:25.546519] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:14:16.222 [2024-07-10 12:16:25.546663] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63385 ] 00:14:16.480 [2024-07-10 12:16:25.722292] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:16.738 [2024-07-10 12:16:26.033105] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:18.113 test_start 00:14:18.113 oneshot 00:14:18.113 tick 100 00:14:18.113 tick 100 00:14:18.113 tick 250 00:14:18.113 tick 100 00:14:18.113 tick 100 00:14:18.113 tick 100 00:14:18.113 tick 250 00:14:18.113 tick 500 00:14:18.113 tick 100 00:14:18.113 tick 100 00:14:18.113 tick 250 00:14:18.113 tick 100 00:14:18.113 tick 100 00:14:18.113 test_end 00:14:18.113 ************************************ 00:14:18.113 END TEST event_reactor 00:14:18.113 ************************************ 00:14:18.113 00:14:18.113 real 0m2.053s 00:14:18.113 user 0m1.791s 00:14:18.113 sys 0m0.150s 00:14:18.113 12:16:27 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:18.113 12:16:27 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:14:18.423 12:16:27 event -- common/autotest_common.sh@1142 -- # return 0 00:14:18.423 12:16:27 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:14:18.423 12:16:27 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:14:18.423 12:16:27 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:18.423 12:16:27 event -- common/autotest_common.sh@10 -- # set +x 00:14:18.423 ************************************ 00:14:18.423 START TEST event_reactor_perf 00:14:18.423 ************************************ 00:14:18.423 12:16:27 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:14:18.423 [2024-07-10 12:16:27.671814] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:14:18.423 [2024-07-10 12:16:27.671956] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63428 ] 00:14:18.423 [2024-07-10 12:16:27.846864] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:18.682 [2024-07-10 12:16:28.148690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:20.582 test_start 00:14:20.582 test_end 00:14:20.582 Performance: 362395 events per second 00:14:20.582 ************************************ 00:14:20.582 END TEST event_reactor_perf 00:14:20.582 ************************************ 00:14:20.582 00:14:20.582 real 0m2.024s 00:14:20.582 user 0m1.770s 00:14:20.582 sys 0m0.142s 00:14:20.582 12:16:29 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:20.582 12:16:29 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:14:20.582 12:16:29 event -- common/autotest_common.sh@1142 -- # return 0 00:14:20.582 12:16:29 event -- event/event.sh@49 -- # uname -s 00:14:20.582 12:16:29 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:14:20.582 12:16:29 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:14:20.582 12:16:29 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:20.582 12:16:29 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:20.582 12:16:29 event -- common/autotest_common.sh@10 -- # set +x 00:14:20.582 ************************************ 00:14:20.582 START TEST event_scheduler 00:14:20.582 ************************************ 00:14:20.582 12:16:29 event.event_scheduler -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:14:20.582 * Looking for test storage... 00:14:20.582 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:14:20.582 12:16:29 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:14:20.582 12:16:29 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:14:20.582 12:16:29 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=63502 00:14:20.582 12:16:29 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:14:20.582 12:16:29 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 63502 00:14:20.582 12:16:29 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 63502 ']' 00:14:20.582 12:16:29 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:20.582 12:16:29 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:20.582 12:16:29 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:20.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:20.582 12:16:29 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:20.582 12:16:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:14:20.582 [2024-07-10 12:16:29.964745] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:14:20.582 [2024-07-10 12:16:29.964950] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63502 ] 00:14:20.841 [2024-07-10 12:16:30.156809] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:21.099 [2024-07-10 12:16:30.456142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:21.099 [2024-07-10 12:16:30.456347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:21.099 [2024-07-10 12:16:30.456495] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:21.099 [2024-07-10 12:16:30.456529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:21.357 12:16:30 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:21.357 12:16:30 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:14:21.357 12:16:30 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:14:21.357 12:16:30 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.357 12:16:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:14:21.357 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:14:21.357 POWER: Cannot set governor of lcore 0 to userspace 00:14:21.357 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:14:21.357 POWER: Cannot set governor of lcore 0 to performance 00:14:21.357 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:14:21.357 POWER: Cannot set governor of lcore 0 to userspace 00:14:21.357 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:14:21.357 POWER: Cannot set governor of lcore 0 to userspace 00:14:21.357 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:14:21.357 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:14:21.357 POWER: Unable to set Power Management Environment for lcore 0 00:14:21.357 [2024-07-10 12:16:30.778447] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:14:21.357 [2024-07-10 12:16:30.778470] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:14:21.358 [2024-07-10 12:16:30.778487] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:14:21.358 [2024-07-10 12:16:30.778511] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:14:21.358 [2024-07-10 12:16:30.778525] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:14:21.358 [2024-07-10 12:16:30.778536] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:14:21.358 12:16:30 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.358 12:16:30 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:14:21.358 12:16:30 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.358 12:16:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:14:21.923 [2024-07-10 12:16:31.213574] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
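The POWER messages above come from the dynamic scheduler's DPDK governor trying to rewrite the cpufreq scaling governor for each lcore; on this VM the sysfs nodes (and the virtio power-agent channel) are unavailable, so it logs "Unable to initialize dpdk governor" and continues with the dynamic scheduler alone. On a bare-metal host where this is expected to work, the interface the governor needs can be sanity-checked by hand (a generic sketch, unrelated to the test scripts):

  # show the governors the cpufreq driver offers and the one currently active on CPU 0
  cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors
  cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
  # the scheduler wants userspace/performance; switching requires root and driver support
  echo userspace | sudo tee /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor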
00:14:21.923 12:16:31 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.923 12:16:31 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:14:21.923 12:16:31 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:21.923 12:16:31 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:21.923 12:16:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:14:21.923 ************************************ 00:14:21.923 START TEST scheduler_create_thread 00:14:21.923 ************************************ 00:14:21.923 12:16:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:14:21.923 12:16:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:14:21.923 12:16:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.923 12:16:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:14:21.923 2 00:14:21.923 12:16:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.923 12:16:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:14:21.923 12:16:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.923 12:16:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:14:21.923 3 00:14:21.923 12:16:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.923 12:16:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:14:21.923 12:16:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.923 12:16:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:14:21.923 4 00:14:21.924 12:16:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.924 12:16:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:14:21.924 12:16:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.924 12:16:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:14:21.924 5 00:14:21.924 12:16:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.924 12:16:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:14:21.924 12:16:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.924 12:16:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:14:21.924 6 00:14:21.924 12:16:31 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.924 12:16:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:14:21.924 12:16:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.924 12:16:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:14:21.924 7 00:14:21.924 12:16:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.924 12:16:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:14:21.924 12:16:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.924 12:16:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:14:21.924 8 00:14:21.924 12:16:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.924 12:16:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:14:21.924 12:16:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.924 12:16:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:14:21.924 9 00:14:21.924 12:16:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.924 12:16:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:14:21.924 12:16:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.924 12:16:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:14:21.924 10 00:14:21.924 12:16:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.924 12:16:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:14:21.924 12:16:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.924 12:16:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:14:23.360 12:16:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.360 12:16:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:14:23.360 12:16:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:14:23.360 12:16:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.360 12:16:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:14:24.296 12:16:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.296 12:16:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:14:24.296 12:16:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.296 12:16:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:14:25.231 12:16:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.231 12:16:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:14:25.231 12:16:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:14:25.231 12:16:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.231 12:16:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:14:25.798 ************************************ 00:14:25.798 END TEST scheduler_create_thread 00:14:25.798 ************************************ 00:14:25.798 12:16:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.798 00:14:25.798 real 0m3.887s 00:14:25.798 user 0m0.028s 00:14:25.798 sys 0m0.006s 00:14:25.798 12:16:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:25.798 12:16:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:14:25.798 12:16:35 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:14:25.798 12:16:35 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:14:25.798 12:16:35 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 63502 00:14:25.798 12:16:35 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 63502 ']' 00:14:25.798 12:16:35 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 63502 00:14:25.798 12:16:35 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:14:25.798 12:16:35 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:25.798 12:16:35 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63502 00:14:25.798 killing process with pid 63502 00:14:25.798 12:16:35 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:25.798 12:16:35 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:25.798 12:16:35 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63502' 00:14:25.798 12:16:35 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 63502 00:14:25.798 12:16:35 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 63502 00:14:26.056 [2024-07-10 12:16:35.495166] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
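For reference, everything scheduler_create_thread exercised above goes through the plugin RPCs visible in the log: scheduler_thread_create, scheduler_thread_set_active and scheduler_thread_delete, loaded with --plugin scheduler_plugin. Run by hand against the same scheduler app, the sequence would look roughly like this (assuming the plugin module is importable, as scheduler.sh arranges via PYTHONPATH):

  # create an active thread pinned to core 0, set thread 11 to 50% active, delete thread 12
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12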
00:14:27.966 00:14:27.966 real 0m7.287s 00:14:27.966 user 0m13.994s 00:14:27.966 sys 0m0.643s 00:14:27.966 ************************************ 00:14:27.966 END TEST event_scheduler 00:14:27.966 ************************************ 00:14:27.966 12:16:37 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:27.966 12:16:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:14:27.966 12:16:37 event -- common/autotest_common.sh@1142 -- # return 0 00:14:27.966 12:16:37 event -- event/event.sh@51 -- # modprobe -n nbd 00:14:27.966 12:16:37 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:14:27.966 12:16:37 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:27.966 12:16:37 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:27.966 12:16:37 event -- common/autotest_common.sh@10 -- # set +x 00:14:27.966 ************************************ 00:14:27.966 START TEST app_repeat 00:14:27.966 ************************************ 00:14:27.966 12:16:37 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:14:27.966 12:16:37 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:27.966 12:16:37 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:27.966 12:16:37 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:14:27.966 12:16:37 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:14:27.966 12:16:37 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:14:27.966 12:16:37 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:14:27.966 12:16:37 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:14:27.966 12:16:37 event.app_repeat -- event/event.sh@19 -- # repeat_pid=63626 00:14:27.966 12:16:37 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:14:27.966 12:16:37 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:14:27.966 12:16:37 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 63626' 00:14:27.966 Process app_repeat pid: 63626 00:14:27.966 spdk_app_start Round 0 00:14:27.966 12:16:37 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:14:27.966 12:16:37 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:14:27.966 12:16:37 event.app_repeat -- event/event.sh@25 -- # waitforlisten 63626 /var/tmp/spdk-nbd.sock 00:14:27.966 12:16:37 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 63626 ']' 00:14:27.966 12:16:37 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:14:27.966 12:16:37 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:27.966 12:16:37 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:14:27.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:14:27.966 12:16:37 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:27.966 12:16:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:14:27.966 [2024-07-10 12:16:37.160084] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:14:27.966 [2024-07-10 12:16:37.160236] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63626 ] 00:14:27.966 [2024-07-10 12:16:37.337835] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:28.225 [2024-07-10 12:16:37.640234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:28.225 [2024-07-10 12:16:37.640281] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:28.792 12:16:38 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:28.792 12:16:38 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:14:28.792 12:16:38 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:14:29.050 Malloc0 00:14:29.050 12:16:38 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:14:29.309 Malloc1 00:14:29.309 12:16:38 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:14:29.309 12:16:38 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:29.309 12:16:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:14:29.309 12:16:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:14:29.309 12:16:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:29.309 12:16:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:14:29.309 12:16:38 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:14:29.309 12:16:38 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:29.309 12:16:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:14:29.309 12:16:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:29.309 12:16:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:29.309 12:16:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:29.309 12:16:38 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:14:29.309 12:16:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:29.309 12:16:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:29.309 12:16:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:14:29.567 /dev/nbd0 00:14:29.567 12:16:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:29.567 12:16:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:29.567 12:16:38 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:14:29.567 12:16:38 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:14:29.567 12:16:38 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:14:29.567 12:16:38 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:14:29.567 12:16:38 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:14:29.567 12:16:38 event.app_repeat -- 
common/autotest_common.sh@871 -- # break 00:14:29.567 12:16:38 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:14:29.567 12:16:38 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:14:29.567 12:16:38 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:14:29.567 1+0 records in 00:14:29.567 1+0 records out 00:14:29.567 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000267412 s, 15.3 MB/s 00:14:29.567 12:16:38 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:14:29.567 12:16:38 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:14:29.567 12:16:38 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:14:29.567 12:16:38 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:14:29.567 12:16:38 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:14:29.567 12:16:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:29.567 12:16:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:29.567 12:16:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:14:29.826 /dev/nbd1 00:14:29.826 12:16:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:29.826 12:16:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:29.826 12:16:39 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:14:29.826 12:16:39 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:14:29.826 12:16:39 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:14:29.826 12:16:39 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:14:29.826 12:16:39 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:14:29.826 12:16:39 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:14:29.826 12:16:39 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:14:29.826 12:16:39 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:14:29.826 12:16:39 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:14:29.826 1+0 records in 00:14:29.826 1+0 records out 00:14:29.826 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000581766 s, 7.0 MB/s 00:14:29.826 12:16:39 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:14:29.826 12:16:39 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:14:29.826 12:16:39 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:14:29.826 12:16:39 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:14:29.826 12:16:39 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:14:29.826 12:16:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:29.826 12:16:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:29.826 12:16:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:29.826 12:16:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:29.826 
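The waitfornbd helper above reduces to a single 4 KiB O_DIRECT read once the device name shows up in /proc/partitions. The same spot check can be reproduced manually with the dd invocation the helper uses (illustrative only; the output file here is a scratch path, not the test's nbdtest file):

  # confirm /dev/nbd0 answers a one-block direct read and that a full 4096 bytes came back
  dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
  stat -c %s /tmp/nbdtest    # expect 4096
  rm -f /tmp/nbdtest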
12:16:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:30.085 12:16:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:30.085 { 00:14:30.085 "nbd_device": "/dev/nbd0", 00:14:30.085 "bdev_name": "Malloc0" 00:14:30.085 }, 00:14:30.085 { 00:14:30.085 "nbd_device": "/dev/nbd1", 00:14:30.085 "bdev_name": "Malloc1" 00:14:30.085 } 00:14:30.085 ]' 00:14:30.085 12:16:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:30.085 { 00:14:30.085 "nbd_device": "/dev/nbd0", 00:14:30.085 "bdev_name": "Malloc0" 00:14:30.085 }, 00:14:30.085 { 00:14:30.085 "nbd_device": "/dev/nbd1", 00:14:30.085 "bdev_name": "Malloc1" 00:14:30.085 } 00:14:30.085 ]' 00:14:30.085 12:16:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:30.085 12:16:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:14:30.085 /dev/nbd1' 00:14:30.085 12:16:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:14:30.085 /dev/nbd1' 00:14:30.085 12:16:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:30.085 12:16:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:14:30.085 12:16:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:14:30.085 12:16:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:14:30.085 12:16:39 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:14:30.085 12:16:39 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:14:30.085 12:16:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:30.085 12:16:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:30.085 12:16:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:14:30.085 12:16:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:14:30.085 12:16:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:14:30.085 12:16:39 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:14:30.085 256+0 records in 00:14:30.085 256+0 records out 00:14:30.085 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0131006 s, 80.0 MB/s 00:14:30.085 12:16:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:30.085 12:16:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:14:30.344 256+0 records in 00:14:30.344 256+0 records out 00:14:30.344 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0279739 s, 37.5 MB/s 00:14:30.344 12:16:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:30.344 12:16:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:14:30.344 256+0 records in 00:14:30.344 256+0 records out 00:14:30.344 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0329391 s, 31.8 MB/s 00:14:30.344 12:16:39 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:14:30.344 12:16:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:30.344 12:16:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:30.344 12:16:39 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:14:30.344 12:16:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:14:30.344 12:16:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:14:30.344 12:16:39 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:14:30.344 12:16:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:30.344 12:16:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:14:30.344 12:16:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:30.344 12:16:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:14:30.344 12:16:39 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:14:30.344 12:16:39 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:14:30.344 12:16:39 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:30.344 12:16:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:30.344 12:16:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:30.344 12:16:39 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:14:30.344 12:16:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:30.344 12:16:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:30.602 12:16:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:30.602 12:16:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:30.602 12:16:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:30.602 12:16:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:30.602 12:16:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:30.602 12:16:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:30.602 12:16:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:14:30.602 12:16:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:14:30.602 12:16:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:30.603 12:16:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:14:30.603 12:16:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:30.603 12:16:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:30.603 12:16:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:30.603 12:16:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:30.603 12:16:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:30.603 12:16:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:30.861 12:16:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:14:30.861 12:16:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:14:30.861 12:16:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:30.861 12:16:40 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:30.861 12:16:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:31.119 12:16:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:31.119 12:16:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:31.119 12:16:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:31.119 12:16:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:31.119 12:16:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:14:31.119 12:16:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:31.119 12:16:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:14:31.119 12:16:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:14:31.119 12:16:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:14:31.119 12:16:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:14:31.119 12:16:40 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:14:31.119 12:16:40 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:14:31.119 12:16:40 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:14:31.376 12:16:40 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:14:33.279 [2024-07-10 12:16:42.308950] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:33.279 [2024-07-10 12:16:42.587514] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:33.279 [2024-07-10 12:16:42.587515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:33.537 [2024-07-10 12:16:42.853879] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:14:33.537 [2024-07-10 12:16:42.853991] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:14:34.470 spdk_app_start Round 1 00:14:34.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:14:34.470 12:16:43 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:14:34.470 12:16:43 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:14:34.471 12:16:43 event.app_repeat -- event/event.sh@25 -- # waitforlisten 63626 /var/tmp/spdk-nbd.sock 00:14:34.471 12:16:43 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 63626 ']' 00:14:34.471 12:16:43 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:14:34.471 12:16:43 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:34.471 12:16:43 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
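Between rounds app_repeat is stopped with spdk_kill_instance SIGTERM and started again, so waitforlisten simply polls until the new instance accepts RPCs on /var/tmp/spdk-nbd.sock. A stripped-down version of that wait loop (a sketch of the idea, not the actual helper) could be:

  # poll the RPC socket until the freshly started app_repeat instance answers
  for i in $(seq 1 100); do
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock rpc_get_methods &>/dev/null && break
    sleep 0.1
  done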
00:14:34.471 12:16:43 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:34.471 12:16:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:14:34.729 12:16:43 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:34.729 12:16:43 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:14:34.729 12:16:43 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:14:34.988 Malloc0 00:14:34.988 12:16:44 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:14:35.246 Malloc1 00:14:35.246 12:16:44 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:14:35.246 12:16:44 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:35.246 12:16:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:14:35.246 12:16:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:14:35.246 12:16:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:35.246 12:16:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:14:35.246 12:16:44 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:14:35.246 12:16:44 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:35.246 12:16:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:14:35.246 12:16:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:35.246 12:16:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:35.246 12:16:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:35.246 12:16:44 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:14:35.246 12:16:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:35.246 12:16:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:35.246 12:16:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:14:35.246 /dev/nbd0 00:14:35.504 12:16:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:35.504 12:16:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:35.504 12:16:44 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:14:35.504 12:16:44 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:14:35.504 12:16:44 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:14:35.504 12:16:44 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:14:35.504 12:16:44 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:14:35.504 12:16:44 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:14:35.504 12:16:44 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:14:35.504 12:16:44 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:14:35.504 12:16:44 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:14:35.504 1+0 records in 00:14:35.504 1+0 records out 
00:14:35.504 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000317041 s, 12.9 MB/s 00:14:35.504 12:16:44 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:14:35.504 12:16:44 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:14:35.504 12:16:44 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:14:35.504 12:16:44 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:14:35.504 12:16:44 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:14:35.504 12:16:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:35.504 12:16:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:35.504 12:16:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:14:35.504 /dev/nbd1 00:14:35.504 12:16:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:35.504 12:16:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:35.504 12:16:44 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:14:35.504 12:16:44 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:14:35.504 12:16:44 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:14:35.504 12:16:44 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:14:35.504 12:16:44 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:14:35.504 12:16:44 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:14:35.504 12:16:44 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:14:35.504 12:16:44 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:14:35.504 12:16:44 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:14:35.504 1+0 records in 00:14:35.504 1+0 records out 00:14:35.504 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00044422 s, 9.2 MB/s 00:14:35.504 12:16:44 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:14:35.504 12:16:44 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:14:35.504 12:16:44 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:14:35.761 12:16:44 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:14:35.761 12:16:44 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:14:35.761 12:16:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:35.761 12:16:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:35.761 12:16:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:35.761 12:16:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:35.762 12:16:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:35.762 12:16:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:35.762 { 00:14:35.762 "nbd_device": "/dev/nbd0", 00:14:35.762 "bdev_name": "Malloc0" 00:14:35.762 }, 00:14:35.762 { 00:14:35.762 "nbd_device": "/dev/nbd1", 00:14:35.762 "bdev_name": "Malloc1" 00:14:35.762 } 
00:14:35.762 ]' 00:14:35.762 12:16:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:35.762 { 00:14:35.762 "nbd_device": "/dev/nbd0", 00:14:35.762 "bdev_name": "Malloc0" 00:14:35.762 }, 00:14:35.762 { 00:14:35.762 "nbd_device": "/dev/nbd1", 00:14:35.762 "bdev_name": "Malloc1" 00:14:35.762 } 00:14:35.762 ]' 00:14:35.762 12:16:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:35.762 12:16:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:14:35.762 /dev/nbd1' 00:14:35.762 12:16:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:14:35.762 /dev/nbd1' 00:14:35.762 12:16:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:35.762 12:16:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:14:35.762 12:16:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:14:36.021 12:16:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:14:36.021 12:16:45 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:14:36.021 12:16:45 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:14:36.021 12:16:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:36.021 12:16:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:36.021 12:16:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:14:36.021 12:16:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:14:36.021 12:16:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:14:36.021 12:16:45 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:14:36.021 256+0 records in 00:14:36.021 256+0 records out 00:14:36.021 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0116396 s, 90.1 MB/s 00:14:36.021 12:16:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:36.021 12:16:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:14:36.021 256+0 records in 00:14:36.021 256+0 records out 00:14:36.021 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0253528 s, 41.4 MB/s 00:14:36.021 12:16:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:36.021 12:16:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:14:36.021 256+0 records in 00:14:36.021 256+0 records out 00:14:36.021 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0372115 s, 28.2 MB/s 00:14:36.021 12:16:45 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:14:36.021 12:16:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:36.021 12:16:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:36.021 12:16:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:14:36.021 12:16:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:14:36.022 12:16:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:14:36.022 12:16:45 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:14:36.022 12:16:45 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:36.022 12:16:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:14:36.022 12:16:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:36.022 12:16:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:14:36.022 12:16:45 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:14:36.022 12:16:45 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:14:36.022 12:16:45 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:36.022 12:16:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:36.022 12:16:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:36.022 12:16:45 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:14:36.022 12:16:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:36.022 12:16:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:36.309 12:16:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:36.309 12:16:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:36.309 12:16:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:36.310 12:16:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:36.310 12:16:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:36.310 12:16:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:36.310 12:16:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:14:36.310 12:16:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:14:36.310 12:16:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:36.310 12:16:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:14:36.310 12:16:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:36.310 12:16:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:36.310 12:16:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:36.310 12:16:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:36.310 12:16:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:36.310 12:16:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:36.310 12:16:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:14:36.310 12:16:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:14:36.582 12:16:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:36.582 12:16:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:36.582 12:16:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:36.582 12:16:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:36.582 12:16:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:36.582 12:16:45 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:14:36.582 12:16:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:36.582 12:16:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:14:36.582 12:16:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:36.582 12:16:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:14:36.582 12:16:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:14:36.582 12:16:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:14:36.582 12:16:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:14:36.582 12:16:46 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:14:36.582 12:16:46 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:14:36.582 12:16:46 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:14:37.146 12:16:46 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:14:38.521 [2024-07-10 12:16:47.962549] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:38.779 [2024-07-10 12:16:48.245355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:38.779 [2024-07-10 12:16:48.245379] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:39.038 [2024-07-10 12:16:48.513912] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:14:39.039 [2024-07-10 12:16:48.514029] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:14:39.973 spdk_app_start Round 2 00:14:39.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:14:39.973 12:16:49 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:14:39.973 12:16:49 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:14:39.973 12:16:49 event.app_repeat -- event/event.sh@25 -- # waitforlisten 63626 /var/tmp/spdk-nbd.sock 00:14:39.973 12:16:49 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 63626 ']' 00:14:39.973 12:16:49 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:14:39.973 12:16:49 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:39.973 12:16:49 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
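Every app_repeat round traced in this log exercises the same data path against a freshly restarted target. A rough per-round sketch assembled from the commands above (RPC names, device paths, and sizes are taken verbatim from the trace; the ordering is slightly condensed):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest

$rpc bdev_malloc_create 64 4096                 # Malloc0
$rpc bdev_malloc_create 64 4096                 # Malloc1
$rpc nbd_start_disk Malloc0 /dev/nbd0           # expose each bdev as a kernel NBD device
$rpc nbd_start_disk Malloc1 /dev/nbd1

dd if=/dev/urandom of="$tmp" bs=4096 count=256  # 1 MiB of random data
for nbd in /dev/nbd0 /dev/nbd1; do
  dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct   # write it through the NBD device
  cmp -b -n 1M "$tmp" "$nbd"                              # read it back and compare
done
rm "$tmp"

$rpc nbd_stop_disk /dev/nbd0
$rpc nbd_stop_disk /dev/nbd1
$rpc spdk_kill_instance SIGTERM                 # end the round; the harness then sleeps and restarts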
00:14:39.973 12:16:49 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:39.973 12:16:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:14:40.231 12:16:49 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:40.231 12:16:49 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:14:40.231 12:16:49 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:14:40.489 Malloc0 00:14:40.489 12:16:49 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:14:40.768 Malloc1 00:14:40.768 12:16:50 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:14:40.768 12:16:50 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:40.768 12:16:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:14:40.768 12:16:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:14:40.768 12:16:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:40.769 12:16:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:14:40.769 12:16:50 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:14:40.769 12:16:50 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:40.769 12:16:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:14:40.769 12:16:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:40.769 12:16:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:40.769 12:16:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:40.769 12:16:50 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:14:40.769 12:16:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:40.769 12:16:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:40.769 12:16:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:14:41.027 /dev/nbd0 00:14:41.027 12:16:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:41.027 12:16:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:41.027 12:16:50 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:14:41.027 12:16:50 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:14:41.027 12:16:50 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:14:41.027 12:16:50 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:14:41.027 12:16:50 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:14:41.027 12:16:50 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:14:41.027 12:16:50 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:14:41.027 12:16:50 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:14:41.027 12:16:50 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:14:41.027 1+0 records in 00:14:41.027 1+0 records out 
00:14:41.027 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00241688 s, 1.7 MB/s 00:14:41.027 12:16:50 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:14:41.027 12:16:50 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:14:41.027 12:16:50 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:14:41.027 12:16:50 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:14:41.027 12:16:50 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:14:41.027 12:16:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:41.027 12:16:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:41.027 12:16:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:14:41.286 /dev/nbd1 00:14:41.286 12:16:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:41.286 12:16:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:41.286 12:16:50 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:14:41.286 12:16:50 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:14:41.286 12:16:50 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:14:41.286 12:16:50 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:14:41.286 12:16:50 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:14:41.286 12:16:50 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:14:41.286 12:16:50 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:14:41.286 12:16:50 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:14:41.286 12:16:50 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:14:41.286 1+0 records in 00:14:41.286 1+0 records out 00:14:41.286 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000407082 s, 10.1 MB/s 00:14:41.286 12:16:50 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:14:41.286 12:16:50 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:14:41.286 12:16:50 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:14:41.286 12:16:50 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:14:41.286 12:16:50 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:14:41.286 12:16:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:41.286 12:16:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:41.286 12:16:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:41.286 12:16:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:41.286 12:16:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:41.545 12:16:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:41.545 { 00:14:41.545 "nbd_device": "/dev/nbd0", 00:14:41.545 "bdev_name": "Malloc0" 00:14:41.545 }, 00:14:41.545 { 00:14:41.545 "nbd_device": "/dev/nbd1", 00:14:41.545 "bdev_name": "Malloc1" 00:14:41.545 } 
00:14:41.545 ]' 00:14:41.545 12:16:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:41.545 { 00:14:41.545 "nbd_device": "/dev/nbd0", 00:14:41.545 "bdev_name": "Malloc0" 00:14:41.545 }, 00:14:41.545 { 00:14:41.545 "nbd_device": "/dev/nbd1", 00:14:41.545 "bdev_name": "Malloc1" 00:14:41.545 } 00:14:41.545 ]' 00:14:41.545 12:16:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:41.545 12:16:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:14:41.545 /dev/nbd1' 00:14:41.545 12:16:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:14:41.545 /dev/nbd1' 00:14:41.545 12:16:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:41.545 12:16:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:14:41.545 12:16:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:14:41.545 12:16:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:14:41.545 12:16:50 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:14:41.545 12:16:50 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:14:41.545 12:16:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:41.545 12:16:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:41.545 12:16:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:14:41.545 12:16:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:14:41.545 12:16:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:14:41.545 12:16:50 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:14:41.545 256+0 records in 00:14:41.545 256+0 records out 00:14:41.545 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0117132 s, 89.5 MB/s 00:14:41.545 12:16:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:41.545 12:16:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:14:41.545 256+0 records in 00:14:41.545 256+0 records out 00:14:41.545 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0283284 s, 37.0 MB/s 00:14:41.545 12:16:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:41.545 12:16:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:14:41.804 256+0 records in 00:14:41.804 256+0 records out 00:14:41.804 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0319157 s, 32.9 MB/s 00:14:41.804 12:16:51 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:14:41.804 12:16:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:41.804 12:16:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:41.804 12:16:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:14:41.804 12:16:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:14:41.804 12:16:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:14:41.804 12:16:51 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:14:41.804 12:16:51 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:41.804 12:16:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:14:41.804 12:16:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:41.804 12:16:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:14:41.804 12:16:51 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:14:41.804 12:16:51 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:14:41.804 12:16:51 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:41.804 12:16:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:41.804 12:16:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:41.804 12:16:51 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:14:41.804 12:16:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:41.804 12:16:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:41.804 12:16:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:41.804 12:16:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:41.804 12:16:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:41.804 12:16:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:41.804 12:16:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:41.804 12:16:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:41.804 12:16:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:14:41.804 12:16:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:14:41.805 12:16:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:41.805 12:16:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:14:42.064 12:16:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:42.064 12:16:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:42.064 12:16:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:42.064 12:16:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:42.064 12:16:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:42.064 12:16:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:42.064 12:16:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:14:42.064 12:16:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:14:42.064 12:16:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:42.064 12:16:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:42.064 12:16:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:42.323 12:16:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:42.323 12:16:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:42.323 12:16:51 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:14:42.323 12:16:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:42.323 12:16:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:14:42.323 12:16:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:42.323 12:16:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:14:42.323 12:16:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:14:42.323 12:16:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:14:42.323 12:16:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:14:42.323 12:16:51 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:14:42.323 12:16:51 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:14:42.323 12:16:51 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:14:42.891 12:16:52 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:14:44.268 [2024-07-10 12:16:53.677793] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:44.528 [2024-07-10 12:16:53.966093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:44.528 [2024-07-10 12:16:53.966094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:44.787 [2024-07-10 12:16:54.241180] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:14:44.787 [2024-07-10 12:16:54.241255] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:14:45.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:14:45.723 12:16:55 event.app_repeat -- event/event.sh@38 -- # waitforlisten 63626 /var/tmp/spdk-nbd.sock 00:14:45.723 12:16:55 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 63626 ']' 00:14:45.723 12:16:55 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:14:45.723 12:16:55 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:45.723 12:16:55 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
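The waitfornbd and waitfornbd_exit helpers seen throughout the trace poll /proc/partitions until the kernel has registered (or released) the device. A hedged sketch of the attach-side loop; the 20-iteration bound comes from the (( i <= 20 )) checks in the trace, while the retry delay is an assumption:

waitfornbd() {
  local nbd_name=$1 i
  for ((i = 1; i <= 20; i++)); do
    grep -q -w "$nbd_name" /proc/partitions && break
    sleep 0.1                                   # assumed back-off; the delay is not visible in the trace
  done
  # one direct read to confirm the device actually serves I/O
  # (the real helper also stats the output file to check for a non-empty read, as the trace shows)
  dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
}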
00:14:45.723 12:16:55 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:45.723 12:16:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:14:45.982 12:16:55 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:45.982 12:16:55 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:14:45.982 12:16:55 event.app_repeat -- event/event.sh@39 -- # killprocess 63626 00:14:45.982 12:16:55 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 63626 ']' 00:14:45.983 12:16:55 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 63626 00:14:45.983 12:16:55 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:14:45.983 12:16:55 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:45.983 12:16:55 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63626 00:14:45.983 12:16:55 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:45.983 12:16:55 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:45.983 12:16:55 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63626' 00:14:45.983 killing process with pid 63626 00:14:45.983 12:16:55 event.app_repeat -- common/autotest_common.sh@967 -- # kill 63626 00:14:45.983 12:16:55 event.app_repeat -- common/autotest_common.sh@972 -- # wait 63626 00:14:47.360 spdk_app_start is called in Round 0. 00:14:47.360 Shutdown signal received, stop current app iteration 00:14:47.360 Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 reinitialization... 00:14:47.360 spdk_app_start is called in Round 1. 00:14:47.360 Shutdown signal received, stop current app iteration 00:14:47.360 Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 reinitialization... 00:14:47.360 spdk_app_start is called in Round 2. 00:14:47.360 Shutdown signal received, stop current app iteration 00:14:47.360 Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 reinitialization... 00:14:47.360 spdk_app_start is called in Round 3. 
00:14:47.360 Shutdown signal received, stop current app iteration 00:14:47.360 12:16:56 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:14:47.360 12:16:56 event.app_repeat -- event/event.sh@42 -- # return 0 00:14:47.360 00:14:47.360 real 0m19.692s 00:14:47.360 user 0m39.452s 00:14:47.360 sys 0m3.423s 00:14:47.360 12:16:56 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:47.360 12:16:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:14:47.360 ************************************ 00:14:47.360 END TEST app_repeat 00:14:47.360 ************************************ 00:14:47.360 12:16:56 event -- common/autotest_common.sh@1142 -- # return 0 00:14:47.361 12:16:56 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:14:47.361 12:16:56 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:14:47.361 12:16:56 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:47.361 12:16:56 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:47.361 12:16:56 event -- common/autotest_common.sh@10 -- # set +x 00:14:47.619 ************************************ 00:14:47.619 START TEST cpu_locks 00:14:47.619 ************************************ 00:14:47.619 12:16:56 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:14:47.619 * Looking for test storage... 00:14:47.619 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:14:47.619 12:16:56 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:14:47.619 12:16:56 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:14:47.619 12:16:56 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:14:47.619 12:16:56 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:14:47.619 12:16:56 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:47.619 12:16:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:47.619 12:16:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:14:47.619 ************************************ 00:14:47.619 START TEST default_locks 00:14:47.619 ************************************ 00:14:47.619 12:16:56 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:14:47.619 12:16:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=64066 00:14:47.619 12:16:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:14:47.619 12:16:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 64066 00:14:47.619 12:16:56 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 64066 ']' 00:14:47.619 12:16:56 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:47.619 12:16:56 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:47.619 12:16:56 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:47.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
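The default_locks test starting here asserts that a target launched with -m 0x1 holds a per-core file lock. The locks_exist check traced below reduces to a single pipeline whose exit status is the assertion:

# pid 64066 is the spdk_tgt just started with -m 0x1; it must hold an spdk_cpu_lock
lslocks -p 64066 | grep -q spdk_cpu_lock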
00:14:47.619 12:16:56 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:47.619 12:16:56 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:14:47.619 [2024-07-10 12:16:57.076915] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:14:47.619 [2024-07-10 12:16:57.077054] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64066 ] 00:14:47.878 [2024-07-10 12:16:57.248844] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:48.137 [2024-07-10 12:16:57.535310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.515 12:16:58 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:49.515 12:16:58 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:14:49.515 12:16:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 64066 00:14:49.515 12:16:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 64066 00:14:49.515 12:16:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:14:49.775 12:16:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 64066 00:14:49.775 12:16:59 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 64066 ']' 00:14:49.775 12:16:59 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 64066 00:14:49.775 12:16:59 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:14:49.775 12:16:59 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:49.775 12:16:59 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64066 00:14:49.775 killing process with pid 64066 00:14:49.775 12:16:59 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:49.775 12:16:59 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:49.775 12:16:59 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64066' 00:14:49.775 12:16:59 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 64066 00:14:49.775 12:16:59 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 64066 00:14:53.121 12:17:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 64066 00:14:53.121 12:17:01 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:14:53.121 12:17:01 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 64066 00:14:53.121 12:17:01 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:14:53.121 12:17:01 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:53.121 12:17:01 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:14:53.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:53.121 ERROR: process (pid: 64066) is no longer running 00:14:53.121 12:17:01 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:53.121 12:17:01 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 64066 00:14:53.121 12:17:01 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 64066 ']' 00:14:53.121 12:17:01 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:53.121 12:17:01 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:53.121 12:17:01 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:53.121 12:17:01 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:53.121 12:17:01 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:14:53.121 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (64066) - No such process 00:14:53.121 12:17:01 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:53.121 12:17:01 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:14:53.121 12:17:01 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:14:53.121 12:17:01 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:53.121 12:17:01 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:53.121 12:17:01 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:53.121 12:17:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:14:53.121 12:17:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:14:53.121 12:17:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:14:53.121 12:17:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:14:53.121 00:14:53.121 real 0m4.925s 00:14:53.121 user 0m4.670s 00:14:53.121 sys 0m0.843s 00:14:53.121 12:17:01 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:53.121 ************************************ 00:14:53.121 END TEST default_locks 00:14:53.121 ************************************ 00:14:53.121 12:17:01 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:14:53.121 12:17:01 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:14:53.121 12:17:01 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:14:53.122 12:17:01 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:53.122 12:17:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:53.122 12:17:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:14:53.122 ************************************ 00:14:53.122 START TEST default_locks_via_rpc 00:14:53.122 ************************************ 00:14:53.122 12:17:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:14:53.122 12:17:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=64152 00:14:53.122 12:17:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 
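The NOT bookkeeping traced just above is an expected-failure check: after the target has been killed, waiting for it again has to fail, and the test records es=1 when it does. Roughly, and assuming autotest_common.sh has been sourced so waitforlisten is available:

# success here would mean the dead pid came back, which is the error case
if waitforlisten 64066; then
  exit 1
fi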
00:14:53.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:53.122 12:17:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 64152 00:14:53.122 12:17:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 64152 ']' 00:14:53.122 12:17:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:53.122 12:17:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:53.122 12:17:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:53.122 12:17:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:53.122 12:17:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:53.122 [2024-07-10 12:17:02.079669] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:14:53.122 [2024-07-10 12:17:02.079843] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64152 ] 00:14:53.122 [2024-07-10 12:17:02.256083] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:53.122 [2024-07-10 12:17:02.547872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.498 12:17:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:54.498 12:17:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:14:54.498 12:17:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:14:54.498 12:17:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.498 12:17:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:54.498 12:17:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.498 12:17:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:14:54.498 12:17:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:14:54.498 12:17:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:14:54.498 12:17:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:14:54.498 12:17:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:14:54.498 12:17:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.498 12:17:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:54.498 12:17:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.498 12:17:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 64152 00:14:54.498 12:17:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 64152 00:14:54.498 12:17:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:14:54.757 12:17:04 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 64152 00:14:54.757 12:17:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 64152 ']' 00:14:54.757 12:17:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 64152 00:14:54.757 12:17:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:14:54.757 12:17:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:54.757 12:17:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64152 00:14:54.757 killing process with pid 64152 00:14:54.757 12:17:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:54.757 12:17:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:54.757 12:17:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64152' 00:14:54.757 12:17:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 64152 00:14:54.757 12:17:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 64152 00:14:58.059 ************************************ 00:14:58.059 END TEST default_locks_via_rpc 00:14:58.059 ************************************ 00:14:58.059 00:14:58.059 real 0m5.029s 00:14:58.059 user 0m4.811s 00:14:58.059 sys 0m0.865s 00:14:58.059 12:17:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:58.059 12:17:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:58.059 12:17:07 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:14:58.059 12:17:07 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:14:58.059 12:17:07 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:58.059 12:17:07 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:58.059 12:17:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:14:58.059 ************************************ 00:14:58.059 START TEST non_locking_app_on_locked_coremask 00:14:58.059 ************************************ 00:14:58.059 12:17:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:14:58.059 12:17:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=64237 00:14:58.059 12:17:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 64237 /var/tmp/spdk.sock 00:14:58.059 12:17:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:14:58.059 12:17:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 64237 ']' 00:14:58.059 12:17:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:58.059 12:17:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:58.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
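The default_locks_via_rpc variant that just finished toggles the core locks at runtime instead of at startup. A hedged outline of that sequence; the "no lock remains" step is simplified here, since the real no_locks helper inspects the lock files on disk rather than lslocks:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
pid=64152                                              # target started with -m 0x1 above
$rpc framework_disable_cpumask_locks                   # release the core locks while running
lslocks -p "$pid" | grep -q spdk_cpu_lock && exit 1    # simplified: expect no lock held now
$rpc framework_enable_cpumask_locks                    # take the locks again
lslocks -p "$pid" | grep -q spdk_cpu_lock || exit 1    # expect spdk_cpu_lock to be back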
00:14:58.059 12:17:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:58.059 12:17:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:58.059 12:17:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:14:58.059 [2024-07-10 12:17:07.176739] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:14:58.059 [2024-07-10 12:17:07.176892] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64237 ] 00:14:58.059 [2024-07-10 12:17:07.350220] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:58.318 [2024-07-10 12:17:07.629449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:59.252 12:17:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:59.252 12:17:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:14:59.252 12:17:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=64253 00:14:59.252 12:17:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 64253 /var/tmp/spdk2.sock 00:14:59.252 12:17:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:14:59.252 12:17:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 64253 ']' 00:14:59.252 12:17:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:14:59.252 12:17:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:59.252 12:17:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:14:59.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:14:59.252 12:17:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:59.252 12:17:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:14:59.511 [2024-07-10 12:17:08.777958] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:14:59.511 [2024-07-10 12:17:08.778296] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64253 ] 00:14:59.511 [2024-07-10 12:17:08.945618] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
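non_locking_app_on_locked_coremask runs two targets on the same core: the first takes the core 0 lock, the second opts out of locking and uses its own RPC socket, so both can run side by side. Stripped of the waitforlisten bookkeeping, the startup traced above is:

bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
"$bin" -m 0x1 &                                                   # pid 64237 above: locks core 0, RPC on /var/tmp/spdk.sock
"$bin" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &    # pid 64253: same core, no lock, own RPC socket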
00:14:59.511 [2024-07-10 12:17:08.945688] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:00.079 [2024-07-10 12:17:09.444668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:01.980 12:17:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:01.980 12:17:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:15:01.980 12:17:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 64237 00:15:01.980 12:17:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 64237 00:15:01.980 12:17:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:15:02.962 12:17:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 64237 00:15:02.962 12:17:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 64237 ']' 00:15:02.962 12:17:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 64237 00:15:02.962 12:17:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:15:02.962 12:17:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:02.962 12:17:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64237 00:15:02.962 killing process with pid 64237 00:15:02.962 12:17:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:02.962 12:17:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:02.962 12:17:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64237' 00:15:02.962 12:17:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 64237 00:15:02.962 12:17:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 64237 00:15:08.232 12:17:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 64253 00:15:08.232 12:17:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 64253 ']' 00:15:08.232 12:17:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 64253 00:15:08.232 12:17:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:15:08.232 12:17:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:08.232 12:17:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64253 00:15:08.232 killing process with pid 64253 00:15:08.232 12:17:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:08.232 12:17:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:08.232 12:17:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64253' 00:15:08.232 12:17:17 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 64253 00:15:08.232 12:17:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 64253 00:15:10.761 ************************************ 00:15:10.761 END TEST non_locking_app_on_locked_coremask 00:15:10.761 ************************************ 00:15:10.761 00:15:10.761 real 0m13.115s 00:15:10.761 user 0m13.141s 00:15:10.761 sys 0m1.561s 00:15:10.761 12:17:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:10.761 12:17:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:15:11.020 12:17:20 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:15:11.020 12:17:20 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:15:11.020 12:17:20 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:11.020 12:17:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:11.020 12:17:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:15:11.020 ************************************ 00:15:11.020 START TEST locking_app_on_unlocked_coremask 00:15:11.020 ************************************ 00:15:11.020 12:17:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:15:11.020 12:17:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=64418 00:15:11.020 12:17:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:15:11.020 12:17:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 64418 /var/tmp/spdk.sock 00:15:11.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:11.020 12:17:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 64418 ']' 00:15:11.020 12:17:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:11.020 12:17:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:11.020 12:17:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:11.020 12:17:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:11.020 12:17:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:15:11.020 [2024-07-10 12:17:20.366110] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:15:11.020 [2024-07-10 12:17:20.366273] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64418 ] 00:15:11.278 [2024-07-10 12:17:20.538407] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:15:11.278 [2024-07-10 12:17:20.538479] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:11.536 [2024-07-10 12:17:20.779925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:12.471 12:17:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:12.471 12:17:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:15:12.471 12:17:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=64434 00:15:12.471 12:17:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 64434 /var/tmp/spdk2.sock 00:15:12.471 12:17:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:15:12.471 12:17:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 64434 ']' 00:15:12.471 12:17:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:15:12.471 12:17:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:12.471 12:17:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:15:12.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:15:12.471 12:17:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:12.471 12:17:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:15:12.471 [2024-07-10 12:17:21.801222] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:15:12.471 [2024-07-10 12:17:21.801571] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64434 ] 00:15:12.728 [2024-07-10 12:17:21.971001] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.325 [2024-07-10 12:17:22.555239] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:15.225 12:17:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:15.225 12:17:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:15:15.225 12:17:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 64434 00:15:15.225 12:17:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 64434 00:15:15.225 12:17:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:15:16.167 12:17:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 64418 00:15:16.167 12:17:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 64418 ']' 00:15:16.167 12:17:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 64418 00:15:16.167 12:17:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:15:16.167 12:17:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:16.167 12:17:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64418 00:15:16.167 killing process with pid 64418 00:15:16.167 12:17:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:16.167 12:17:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:16.167 12:17:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64418' 00:15:16.167 12:17:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 64418 00:15:16.167 12:17:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 64418 00:15:21.515 12:17:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 64434 00:15:21.515 12:17:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 64434 ']' 00:15:21.515 12:17:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 64434 00:15:21.515 12:17:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:15:21.515 12:17:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:21.515 12:17:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64434 00:15:21.515 12:17:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:21.515 killing process with pid 64434 00:15:21.515 12:17:30 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:21.515 12:17:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64434' 00:15:21.515 12:17:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 64434 00:15:21.515 12:17:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 64434 00:15:24.049 00:15:24.050 real 0m13.187s 00:15:24.050 user 0m13.255s 00:15:24.050 sys 0m1.610s 00:15:24.050 12:17:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:24.050 12:17:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:15:24.050 ************************************ 00:15:24.050 END TEST locking_app_on_unlocked_coremask 00:15:24.050 ************************************ 00:15:24.050 12:17:33 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:15:24.050 12:17:33 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:15:24.050 12:17:33 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:24.050 12:17:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:24.050 12:17:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:15:24.050 ************************************ 00:15:24.050 START TEST locking_app_on_locked_coremask 00:15:24.050 ************************************ 00:15:24.050 12:17:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:15:24.050 12:17:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=64595 00:15:24.050 12:17:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 64595 /var/tmp/spdk.sock 00:15:24.050 12:17:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:15:24.050 12:17:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 64595 ']' 00:15:24.050 12:17:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:24.050 12:17:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:24.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:24.308 12:17:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:24.308 12:17:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:24.309 12:17:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:15:24.309 [2024-07-10 12:17:33.634500] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:15:24.309 [2024-07-10 12:17:33.634650] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64595 ] 00:15:24.567 [2024-07-10 12:17:33.805498] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:24.826 [2024-07-10 12:17:34.097149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.761 12:17:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:25.761 12:17:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:15:25.761 12:17:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:15:25.761 12:17:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=64622 00:15:25.761 12:17:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 64622 /var/tmp/spdk2.sock 00:15:25.761 12:17:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:15:25.761 12:17:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 64622 /var/tmp/spdk2.sock 00:15:25.761 12:17:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:15:25.761 12:17:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:25.761 12:17:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:15:25.761 12:17:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:25.762 12:17:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 64622 /var/tmp/spdk2.sock 00:15:25.762 12:17:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 64622 ']' 00:15:25.762 12:17:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:15:25.762 12:17:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:25.762 12:17:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:15:25.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:15:25.762 12:17:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:25.762 12:17:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:15:25.762 [2024-07-10 12:17:35.232029] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:15:25.762 [2024-07-10 12:17:35.232410] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64622 ] 00:15:26.020 [2024-07-10 12:17:35.400809] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 64595 has claimed it. 00:15:26.020 [2024-07-10 12:17:35.400891] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:15:26.588 ERROR: process (pid: 64622) is no longer running 00:15:26.588 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (64622) - No such process 00:15:26.588 12:17:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:26.588 12:17:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:15:26.588 12:17:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:15:26.588 12:17:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:26.588 12:17:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:26.588 12:17:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:26.588 12:17:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 64595 00:15:26.588 12:17:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 64595 00:15:26.588 12:17:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:15:26.847 12:17:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 64595 00:15:26.847 12:17:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 64595 ']' 00:15:26.847 12:17:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 64595 00:15:26.847 12:17:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:15:27.105 12:17:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:27.105 12:17:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64595 00:15:27.105 12:17:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:27.105 12:17:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:27.105 killing process with pid 64595 00:15:27.105 12:17:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64595' 00:15:27.105 12:17:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 64595 00:15:27.105 12:17:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 64595 00:15:30.392 00:15:30.392 real 0m5.640s 00:15:30.392 user 0m5.611s 00:15:30.392 sys 0m0.992s 00:15:30.392 12:17:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:30.392 ************************************ 00:15:30.392 END 
TEST locking_app_on_locked_coremask 00:15:30.392 ************************************ 00:15:30.392 12:17:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:15:30.392 12:17:39 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:15:30.392 12:17:39 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:15:30.392 12:17:39 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:30.392 12:17:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:30.392 12:17:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:15:30.392 ************************************ 00:15:30.392 START TEST locking_overlapped_coremask 00:15:30.392 ************************************ 00:15:30.392 12:17:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:15:30.392 12:17:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=64697 00:15:30.392 12:17:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:15:30.392 12:17:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 64697 /var/tmp/spdk.sock 00:15:30.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:30.392 12:17:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 64697 ']' 00:15:30.392 12:17:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:30.392 12:17:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:30.392 12:17:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:30.392 12:17:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:30.392 12:17:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:15:30.392 [2024-07-10 12:17:39.349868] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:15:30.392 [2024-07-10 12:17:39.350030] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64697 ] 00:15:30.392 [2024-07-10 12:17:39.528380] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:30.392 [2024-07-10 12:17:39.822511] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:30.392 [2024-07-10 12:17:39.822680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:30.392 [2024-07-10 12:17:39.822708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:31.768 12:17:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:31.768 12:17:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:15:31.768 12:17:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:15:31.768 12:17:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=64721 00:15:31.768 12:17:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 64721 /var/tmp/spdk2.sock 00:15:31.768 12:17:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:15:31.768 12:17:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 64721 /var/tmp/spdk2.sock 00:15:31.768 12:17:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:15:31.768 12:17:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:31.768 12:17:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:15:31.768 12:17:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:31.768 12:17:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 64721 /var/tmp/spdk2.sock 00:15:31.768 12:17:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 64721 ']' 00:15:31.768 12:17:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:15:31.768 12:17:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:31.768 12:17:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:15:31.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:15:31.768 12:17:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:31.768 12:17:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:15:31.768 [2024-07-10 12:17:41.023572] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:15:31.768 [2024-07-10 12:17:41.023983] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64721 ] 00:15:31.768 [2024-07-10 12:17:41.195178] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 64697 has claimed it. 00:15:31.768 [2024-07-10 12:17:41.195255] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:15:32.336 ERROR: process (pid: 64721) is no longer running 00:15:32.336 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (64721) - No such process 00:15:32.336 12:17:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:32.336 12:17:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:15:32.336 12:17:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:15:32.336 12:17:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:32.336 12:17:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:32.336 12:17:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:32.336 12:17:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:15:32.336 12:17:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:15:32.336 12:17:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:15:32.336 12:17:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:15:32.336 12:17:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 64697 00:15:32.336 12:17:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 64697 ']' 00:15:32.336 12:17:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 64697 00:15:32.336 12:17:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:15:32.336 12:17:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:32.336 12:17:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64697 00:15:32.336 12:17:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:32.336 12:17:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:32.336 12:17:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64697' 00:15:32.336 killing process with pid 64697 00:15:32.336 12:17:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 64697 00:15:32.336 12:17:41 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 64697 00:15:35.704 00:15:35.704 real 0m5.210s 00:15:35.704 user 0m13.178s 00:15:35.704 sys 0m0.864s 00:15:35.704 12:17:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:35.704 ************************************ 00:15:35.704 12:17:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:15:35.704 END TEST locking_overlapped_coremask 00:15:35.704 ************************************ 00:15:35.704 12:17:44 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:15:35.704 12:17:44 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:15:35.704 12:17:44 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:35.704 12:17:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:35.704 12:17:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:15:35.704 ************************************ 00:15:35.704 START TEST locking_overlapped_coremask_via_rpc 00:15:35.704 ************************************ 00:15:35.704 12:17:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:15:35.704 12:17:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=64789 00:15:35.704 12:17:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 64789 /var/tmp/spdk.sock 00:15:35.704 12:17:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:15:35.704 12:17:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 64789 ']' 00:15:35.704 12:17:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:35.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:35.704 12:17:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:35.704 12:17:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:35.704 12:17:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:35.704 12:17:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:35.704 [2024-07-10 12:17:44.644287] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:15:35.704 [2024-07-10 12:17:44.644937] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64789 ] 00:15:35.704 [2024-07-10 12:17:44.819520] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
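The two app.c ERROR lines above (cores 0 and 2) show the expected outcome when a second target's core mask overlaps a core that is already locked: claim_cpu_cores refuses the lock and spdk_app_start exits. A rough reproduction of the single-core case, assuming it is run from an SPDK build tree and using a plain sleep where the test suite uses its waitforlisten helper:

# First target claims core 0 (core locks are on by default).
./build/bin/spdk_tgt -m 0x1 &
first_pid=$!
sleep 2   # crude wait for startup; the suite polls the RPC socket instead

# A second target on the same core but a different RPC socket should fail with
# "Cannot create lock on core 0, probably process <first_pid> has claimed it."
if ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock; then
    echo "unexpected: second target started"
else
    echo "second target exited as expected"
fi

kill "$first_pid"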
00:15:35.704 [2024-07-10 12:17:44.819616] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:35.704 [2024-07-10 12:17:45.116244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:35.704 [2024-07-10 12:17:45.116348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:35.704 [2024-07-10 12:17:45.116380] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:37.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:15:37.082 12:17:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:37.082 12:17:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:15:37.082 12:17:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:15:37.082 12:17:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=64814 00:15:37.082 12:17:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 64814 /var/tmp/spdk2.sock 00:15:37.082 12:17:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 64814 ']' 00:15:37.082 12:17:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:15:37.082 12:17:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:37.082 12:17:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:15:37.082 12:17:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:37.082 12:17:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:37.082 [2024-07-10 12:17:46.269052] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:15:37.082 [2024-07-10 12:17:46.269406] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64814 ] 00:15:37.082 [2024-07-10 12:17:46.440818] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:15:37.082 [2024-07-10 12:17:46.440901] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:37.715 [2024-07-10 12:17:46.922999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:37.715 [2024-07-10 12:17:46.923044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:37.715 [2024-07-10 12:17:46.923089] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:15:39.621 12:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:39.622 12:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:15:39.622 12:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:15:39.622 12:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.622 12:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:39.622 12:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.622 12:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:15:39.622 12:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:15:39.622 12:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:15:39.622 12:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:15:39.622 12:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:39.622 12:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:15:39.622 12:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:39.622 12:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:15:39.622 12:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.622 12:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:39.622 [2024-07-10 12:17:48.851980] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 64789 has claimed it. 00:15:39.622 request: 00:15:39.622 { 00:15:39.622 "method": "framework_enable_cpumask_locks", 00:15:39.622 "req_id": 1 00:15:39.622 } 00:15:39.622 Got JSON-RPC error response 00:15:39.622 response: 00:15:39.622 { 00:15:39.622 "code": -32603, 00:15:39.622 "message": "Failed to claim CPU core: 2" 00:15:39.622 } 00:15:39.622 12:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:15:39.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
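The JSON-RPC exchange above enables core locks at runtime on the first target and then shows the same call failing against the second target because core 2 is now claimed. A hedged sketch of those two calls using scripts/rpc.py (the path and subcommand name mirror the rpc_cmd trace above but are assumptions here):

# Enable CPU core locks on the target listening on the default socket (/var/tmp/spdk.sock).
./scripts/rpc.py framework_enable_cpumask_locks

# Against the second target this is expected to fail with the -32603
# "Failed to claim CPU core: 2" response shown in the log above.
./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks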
00:15:39.622 12:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:15:39.622 12:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:39.622 12:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:39.622 12:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:39.622 12:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 64789 /var/tmp/spdk.sock 00:15:39.622 12:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 64789 ']' 00:15:39.622 12:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:39.622 12:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:39.622 12:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:39.622 12:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:39.622 12:17:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:39.622 12:17:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:39.622 12:17:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:15:39.622 12:17:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 64814 /var/tmp/spdk2.sock 00:15:39.622 12:17:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 64814 ']' 00:15:39.622 12:17:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:15:39.622 12:17:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:39.622 12:17:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:15:39.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:15:39.622 12:17:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:39.622 12:17:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:39.880 12:17:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:39.880 12:17:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:15:39.880 12:17:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:15:39.880 12:17:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:15:39.880 12:17:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:15:39.880 12:17:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:15:39.880 00:15:39.880 real 0m4.741s 00:15:39.880 user 0m1.190s 00:15:39.880 sys 0m0.247s 00:15:39.880 12:17:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:39.880 12:17:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:39.880 ************************************ 00:15:39.880 END TEST locking_overlapped_coremask_via_rpc 00:15:39.880 ************************************ 00:15:39.880 12:17:49 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:15:39.880 12:17:49 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:15:39.880 12:17:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 64789 ]] 00:15:39.880 12:17:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 64789 00:15:39.880 12:17:49 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 64789 ']' 00:15:39.880 12:17:49 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 64789 00:15:39.880 12:17:49 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:15:39.880 12:17:49 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:39.880 12:17:49 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64789 00:15:40.148 12:17:49 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:40.148 killing process with pid 64789 00:15:40.148 12:17:49 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:40.148 12:17:49 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64789' 00:15:40.148 12:17:49 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 64789 00:15:40.148 12:17:49 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 64789 00:15:43.439 12:17:52 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 64814 ]] 00:15:43.439 12:17:52 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 64814 00:15:43.439 12:17:52 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 64814 ']' 00:15:43.439 12:17:52 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 64814 00:15:43.439 12:17:52 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:15:43.439 12:17:52 
event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:43.439 12:17:52 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64814 00:15:43.439 killing process with pid 64814 00:15:43.439 12:17:52 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:43.439 12:17:52 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:15:43.439 12:17:52 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64814' 00:15:43.439 12:17:52 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 64814 00:15:43.439 12:17:52 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 64814 00:15:45.396 12:17:54 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:15:45.396 12:17:54 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:15:45.396 12:17:54 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 64789 ]] 00:15:45.396 12:17:54 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 64789 00:15:45.396 12:17:54 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 64789 ']' 00:15:45.396 12:17:54 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 64789 00:15:45.396 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (64789) - No such process 00:15:45.396 Process with pid 64789 is not found 00:15:45.396 12:17:54 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 64789 is not found' 00:15:45.396 12:17:54 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 64814 ]] 00:15:45.396 Process with pid 64814 is not found 00:15:45.396 12:17:54 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 64814 00:15:45.396 12:17:54 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 64814 ']' 00:15:45.396 12:17:54 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 64814 00:15:45.396 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (64814) - No such process 00:15:45.396 12:17:54 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 64814 is not found' 00:15:45.396 12:17:54 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:15:45.396 00:15:45.396 real 0m57.891s 00:15:45.396 user 1m33.372s 00:15:45.396 sys 0m8.405s 00:15:45.396 12:17:54 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:45.396 12:17:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:15:45.396 ************************************ 00:15:45.396 END TEST cpu_locks 00:15:45.396 ************************************ 00:15:45.396 12:17:54 event -- common/autotest_common.sh@1142 -- # return 0 00:15:45.396 ************************************ 00:15:45.396 END TEST event 00:15:45.396 ************************************ 00:15:45.396 00:15:45.396 real 1m31.612s 00:15:45.396 user 2m35.359s 00:15:45.396 sys 0m13.308s 00:15:45.396 12:17:54 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:45.396 12:17:54 event -- common/autotest_common.sh@10 -- # set +x 00:15:45.396 12:17:54 -- common/autotest_common.sh@1142 -- # return 0 00:15:45.396 12:17:54 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:15:45.396 12:17:54 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:45.396 12:17:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:45.396 12:17:54 -- common/autotest_common.sh@10 -- # set +x 00:15:45.396 ************************************ 00:15:45.396 START TEST thread 
00:15:45.396 ************************************ 00:15:45.396 12:17:54 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:15:45.655 * Looking for test storage... 00:15:45.655 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:15:45.655 12:17:54 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:15:45.655 12:17:54 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:15:45.655 12:17:54 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:45.655 12:17:54 thread -- common/autotest_common.sh@10 -- # set +x 00:15:45.655 ************************************ 00:15:45.655 START TEST thread_poller_perf 00:15:45.655 ************************************ 00:15:45.655 12:17:54 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:15:45.655 [2024-07-10 12:17:55.043072] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:15:45.655 [2024-07-10 12:17:55.043205] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65001 ] 00:15:45.913 [2024-07-10 12:17:55.217618] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:46.171 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:15:46.171 [2024-07-10 12:17:55.473844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:47.570 ====================================== 00:15:47.570 busy:2500744130 (cyc) 00:15:47.570 total_run_count: 387000 00:15:47.570 tsc_hz: 2490000000 (cyc) 00:15:47.570 ====================================== 00:15:47.570 poller_cost: 6461 (cyc), 2594 (nsec) 00:15:47.570 ************************************ 00:15:47.570 END TEST thread_poller_perf 00:15:47.570 ************************************ 00:15:47.570 00:15:47.570 real 0m1.935s 00:15:47.570 user 0m1.685s 00:15:47.570 sys 0m0.138s 00:15:47.570 12:17:56 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:47.570 12:17:56 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:15:47.570 12:17:56 thread -- common/autotest_common.sh@1142 -- # return 0 00:15:47.570 12:17:56 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:15:47.570 12:17:56 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:15:47.570 12:17:56 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:47.570 12:17:56 thread -- common/autotest_common.sh@10 -- # set +x 00:15:47.570 ************************************ 00:15:47.570 START TEST thread_poller_perf 00:15:47.570 ************************************ 00:15:47.570 12:17:57 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:15:47.828 [2024-07-10 12:17:57.054595] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:15:47.828 [2024-07-10 12:17:57.055248] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65043 ] 00:15:47.828 [2024-07-10 12:17:57.226922] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:48.085 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:15:48.085 [2024-07-10 12:17:57.475818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.541 ====================================== 00:15:49.541 busy:2493935212 (cyc) 00:15:49.541 total_run_count: 5107000 00:15:49.541 tsc_hz: 2490000000 (cyc) 00:15:49.541 ====================================== 00:15:49.541 poller_cost: 488 (cyc), 195 (nsec) 00:15:49.541 ************************************ 00:15:49.541 END TEST thread_poller_perf 00:15:49.541 ************************************ 00:15:49.541 00:15:49.541 real 0m1.918s 00:15:49.541 user 0m1.680s 00:15:49.541 sys 0m0.126s 00:15:49.541 12:17:58 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:49.541 12:17:58 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:15:49.541 12:17:58 thread -- common/autotest_common.sh@1142 -- # return 0 00:15:49.541 12:17:58 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:15:49.541 00:15:49.541 real 0m4.123s 00:15:49.541 user 0m3.457s 00:15:49.541 sys 0m0.443s 00:15:49.541 ************************************ 00:15:49.541 END TEST thread 00:15:49.541 ************************************ 00:15:49.541 12:17:58 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:49.541 12:17:58 thread -- common/autotest_common.sh@10 -- # set +x 00:15:49.800 12:17:59 -- common/autotest_common.sh@1142 -- # return 0 00:15:49.800 12:17:59 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:15:49.800 12:17:59 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:49.800 12:17:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:49.800 12:17:59 -- common/autotest_common.sh@10 -- # set +x 00:15:49.800 ************************************ 00:15:49.800 START TEST accel 00:15:49.800 ************************************ 00:15:49.800 12:17:59 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:15:49.800 * Looking for test storage... 00:15:49.800 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:15:49.800 12:17:59 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:15:49.800 12:17:59 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:15:49.800 12:17:59 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:15:49.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:49.800 12:17:59 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=65124 00:15:49.800 12:17:59 accel -- accel/accel.sh@63 -- # waitforlisten 65124 00:15:49.800 12:17:59 accel -- common/autotest_common.sh@829 -- # '[' -z 65124 ']' 00:15:49.800 12:17:59 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:49.800 12:17:59 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:49.800 12:17:59 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
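The printed poller_cost values are consistent with dividing the busy cycle count by total_run_count and converting through tsc_hz. A quick check against the second run above, with all numbers taken from that output:

busy=2493935212        # busy cycles reported above
runs=5107000           # total_run_count
tsc_hz=2490000000      # cycles per second

cost_cyc=$(( busy / runs ))                      # 488, matching "poller_cost: 488 (cyc)"
cost_nsec=$(( cost_cyc * 1000000000 / tsc_hz ))  # 195, matching "(195 nsec)"
echo "poller_cost: ${cost_cyc} (cyc), ${cost_nsec} (nsec)"

The same arithmetic reproduces the 6461-cycle / 2594-nsec figures from the first run.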
00:15:49.800 12:17:59 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:49.800 12:17:59 accel -- common/autotest_common.sh@10 -- # set +x 00:15:49.800 12:17:59 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:15:49.800 12:17:59 accel -- accel/accel.sh@61 -- # build_accel_config 00:15:49.800 12:17:59 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:15:49.800 12:17:59 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:15:49.800 12:17:59 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:15:49.800 12:17:59 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:15:49.800 12:17:59 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:15:49.800 12:17:59 accel -- accel/accel.sh@40 -- # local IFS=, 00:15:49.800 12:17:59 accel -- accel/accel.sh@41 -- # jq -r . 00:15:50.059 [2024-07-10 12:17:59.292418] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:15:50.059 [2024-07-10 12:17:59.292569] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65124 ] 00:15:50.059 [2024-07-10 12:17:59.467450] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:50.429 [2024-07-10 12:17:59.705808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.380 12:18:00 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:51.380 12:18:00 accel -- common/autotest_common.sh@862 -- # return 0 00:15:51.380 12:18:00 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:15:51.380 12:18:00 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:15:51.380 12:18:00 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:15:51.381 12:18:00 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:15:51.381 12:18:00 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:15:51.381 12:18:00 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:15:51.381 12:18:00 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.381 12:18:00 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:15:51.381 12:18:00 accel -- common/autotest_common.sh@10 -- # set +x 00:15:51.381 12:18:00 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.381 12:18:00 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:15:51.381 12:18:00 accel -- accel/accel.sh@72 -- # IFS== 00:15:51.381 12:18:00 accel -- accel/accel.sh@72 -- # read -r opc module 00:15:51.381 12:18:00 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:15:51.381 12:18:00 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:15:51.381 12:18:00 accel -- accel/accel.sh@72 -- # IFS== 00:15:51.381 12:18:00 accel -- accel/accel.sh@72 -- # read -r opc module 00:15:51.381 12:18:00 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:15:51.381 12:18:00 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:15:51.381 12:18:00 accel -- accel/accel.sh@72 -- # IFS== 00:15:51.381 12:18:00 accel -- accel/accel.sh@72 -- # read -r opc module 00:15:51.381 12:18:00 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:15:51.381 12:18:00 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:15:51.381 12:18:00 accel -- accel/accel.sh@72 -- # IFS== 00:15:51.381 12:18:00 accel -- accel/accel.sh@72 -- # read -r opc module 00:15:51.381 12:18:00 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:15:51.381 12:18:00 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:15:51.381 12:18:00 accel -- accel/accel.sh@72 -- # IFS== 00:15:51.381 12:18:00 accel -- accel/accel.sh@72 -- # read -r opc module 00:15:51.381 12:18:00 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:15:51.381 12:18:00 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:15:51.381 12:18:00 accel -- accel/accel.sh@72 -- # IFS== 00:15:51.381 12:18:00 accel -- accel/accel.sh@72 -- # read -r opc module 00:15:51.381 12:18:00 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:15:51.381 12:18:00 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:15:51.381 12:18:00 accel -- accel/accel.sh@72 -- # IFS== 00:15:51.381 12:18:00 accel -- accel/accel.sh@72 -- # read -r opc module 00:15:51.381 12:18:00 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:15:51.381 12:18:00 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:15:51.381 12:18:00 accel -- accel/accel.sh@72 -- # IFS== 00:15:51.381 12:18:00 accel -- accel/accel.sh@72 -- # read -r opc module 00:15:51.381 12:18:00 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:15:51.381 12:18:00 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:15:51.381 12:18:00 accel -- accel/accel.sh@72 -- # IFS== 00:15:51.381 12:18:00 accel -- accel/accel.sh@72 -- # read -r opc module 00:15:51.381 12:18:00 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:15:51.381 12:18:00 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:15:51.381 12:18:00 accel -- accel/accel.sh@72 -- # IFS== 00:15:51.381 12:18:00 accel -- accel/accel.sh@72 -- # read -r opc module 00:15:51.381 12:18:00 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:15:51.381 12:18:00 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:15:51.381 12:18:00 accel -- accel/accel.sh@72 -- # IFS== 00:15:51.381 12:18:00 accel -- accel/accel.sh@72 -- # read -r opc module 00:15:51.381 12:18:00 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:15:51.381 
12:18:00 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:15:51.381 12:18:00 accel -- accel/accel.sh@72 -- # IFS== 00:15:51.381 12:18:00 accel -- accel/accel.sh@72 -- # read -r opc module 00:15:51.381 12:18:00 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:15:51.381 12:18:00 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:15:51.381 12:18:00 accel -- accel/accel.sh@72 -- # IFS== 00:15:51.381 12:18:00 accel -- accel/accel.sh@72 -- # read -r opc module 00:15:51.381 12:18:00 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:15:51.381 12:18:00 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:15:51.381 12:18:00 accel -- accel/accel.sh@72 -- # IFS== 00:15:51.381 12:18:00 accel -- accel/accel.sh@72 -- # read -r opc module 00:15:51.381 12:18:00 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:15:51.381 12:18:00 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:15:51.381 12:18:00 accel -- accel/accel.sh@72 -- # IFS== 00:15:51.381 12:18:00 accel -- accel/accel.sh@72 -- # read -r opc module 00:15:51.381 12:18:00 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:15:51.381 12:18:00 accel -- accel/accel.sh@75 -- # killprocess 65124 00:15:51.381 12:18:00 accel -- common/autotest_common.sh@948 -- # '[' -z 65124 ']' 00:15:51.381 12:18:00 accel -- common/autotest_common.sh@952 -- # kill -0 65124 00:15:51.381 12:18:00 accel -- common/autotest_common.sh@953 -- # uname 00:15:51.381 12:18:00 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:51.381 12:18:00 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65124 00:15:51.381 killing process with pid 65124 00:15:51.381 12:18:00 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:51.381 12:18:00 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:51.381 12:18:00 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65124' 00:15:51.381 12:18:00 accel -- common/autotest_common.sh@967 -- # kill 65124 00:15:51.381 12:18:00 accel -- common/autotest_common.sh@972 -- # wait 65124 00:15:53.912 12:18:03 accel -- accel/accel.sh@76 -- # trap - ERR 00:15:53.912 12:18:03 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:15:53.912 12:18:03 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:53.912 12:18:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:53.912 12:18:03 accel -- common/autotest_common.sh@10 -- # set +x 00:15:53.912 12:18:03 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:15:53.912 12:18:03 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:15:53.912 12:18:03 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:15:53.912 12:18:03 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:15:53.912 12:18:03 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:15:53.912 12:18:03 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:15:53.912 12:18:03 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:15:53.912 12:18:03 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:15:53.912 12:18:03 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:15:53.912 12:18:03 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
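The runs of "for opc_opt ... / IFS== / read -r opc module / expected_opcs["$opc"]=software" above come from the accel.sh loop that records which module is expected to service each opcode; in this run no hardware module is configured, so every opcode ends up mapped to software. A minimal standalone sketch of that bookkeeping, assuming "name=value" input lines such as the jq filter 'to_entries | map("\(.key)=\(.value)")' produces (the sample array below is illustrative, not taken from this log):

    #!/usr/bin/env bash
    # Sketch of the opcode -> module bookkeeping traced above; not the literal accel.sh code.
    declare -A expected_opcs
    exp_opcs=("copy=software" "fill=software" "crc32c=software")   # illustrative sample data

    for opc_opt in "${exp_opcs[@]}"; do
        # Split "opcode=module" on '='; with no accel module assigned, everything is software.
        IFS== read -r opc module <<< "$opc_opt"
        expected_opcs["$opc"]=${module:-software}
    done

    for opc in "${!expected_opcs[@]}"; do
        printf '%s -> %s\n' "$opc" "${expected_opcs[$opc]}"
    done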
00:15:53.912 12:18:03 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:53.912 12:18:03 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:15:53.912 12:18:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:15:53.912 12:18:03 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:15:53.912 12:18:03 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:15:53.912 12:18:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:53.912 12:18:03 accel -- common/autotest_common.sh@10 -- # set +x 00:15:53.912 ************************************ 00:15:53.912 START TEST accel_missing_filename 00:15:53.912 ************************************ 00:15:53.912 12:18:03 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:15:53.912 12:18:03 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:15:53.912 12:18:03 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:15:53.912 12:18:03 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:15:53.912 12:18:03 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:53.912 12:18:03 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:15:53.912 12:18:03 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:53.912 12:18:03 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:15:53.912 12:18:03 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:15:53.912 12:18:03 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:15:53.912 12:18:03 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:15:53.912 12:18:03 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:15:53.912 12:18:03 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:15:53.912 12:18:03 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:15:53.912 12:18:03 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:15:53.912 12:18:03 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:15:53.912 12:18:03 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:15:54.170 [2024-07-10 12:18:03.413539] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:15:54.170 [2024-07-10 12:18:03.413674] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65205 ] 00:15:54.170 [2024-07-10 12:18:03.584151] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:54.427 [2024-07-10 12:18:03.835923] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.686 [2024-07-10 12:18:04.084793] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:55.253 [2024-07-10 12:18:04.645988] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:15:55.820 A filename is required. 
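The accel_missing_filename case above is a negative test: run_test wraps the command in NOT, and it only passes because accel_perf aborts with "A filename is required." The exit-status handling that follows (es captured, values above 128 folded, success meaning the wrapped command failed) suggests a wrapper roughly like the sketch below; the real helper lives in autotest_common.sh and differs in detail:

    # Rough sketch of a NOT-style negative-test helper; not the autotest_common.sh original.
    NOT() {
        local es=0
        "$@" || es=$?                          # run the command, keep its exit status
        (( es > 128 )) && es=$(( es - 128 ))   # fold signal-style exits (assumption in this sketch)
        (( es != 0 ))                          # succeed only if the wrapped command failed
    }

    # Example: the test passes when accel_perf refuses to run compress without -l <file>.
    NOT /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress \
        && echo 'negative test passed'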
00:15:55.820 12:18:05 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:15:55.820 12:18:05 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:55.820 12:18:05 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:15:55.820 12:18:05 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:15:55.820 12:18:05 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:15:55.820 12:18:05 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:55.820 00:15:55.820 real 0m1.762s 00:15:55.820 user 0m1.498s 00:15:55.820 sys 0m0.201s 00:15:55.820 12:18:05 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:55.820 ************************************ 00:15:55.820 12:18:05 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:15:55.820 END TEST accel_missing_filename 00:15:55.820 ************************************ 00:15:55.820 12:18:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:15:55.820 12:18:05 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:15:55.820 12:18:05 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:15:55.820 12:18:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:55.820 12:18:05 accel -- common/autotest_common.sh@10 -- # set +x 00:15:55.820 ************************************ 00:15:55.820 START TEST accel_compress_verify 00:15:55.820 ************************************ 00:15:55.820 12:18:05 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:15:55.820 12:18:05 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:15:55.820 12:18:05 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:15:55.820 12:18:05 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:15:55.820 12:18:05 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:55.820 12:18:05 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:15:55.820 12:18:05 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:55.820 12:18:05 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:15:55.820 12:18:05 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:15:55.820 12:18:05 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:15:55.820 12:18:05 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:15:55.820 12:18:05 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:15:55.820 12:18:05 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:15:55.820 12:18:05 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:15:55.820 12:18:05 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:15:55.820 12:18:05 accel.accel_compress_verify -- 
accel/accel.sh@40 -- # local IFS=, 00:15:55.820 12:18:05 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:15:55.820 [2024-07-10 12:18:05.250612] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:15:55.820 [2024-07-10 12:18:05.250753] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65247 ] 00:15:56.078 [2024-07-10 12:18:05.425995] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:56.337 [2024-07-10 12:18:05.670762] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:56.595 [2024-07-10 12:18:05.914714] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:57.162 [2024-07-10 12:18:06.482365] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:15:57.730 00:15:57.730 Compression does not support the verify option, aborting. 00:15:57.730 12:18:06 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:15:57.730 12:18:06 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:57.730 12:18:06 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:15:57.730 12:18:06 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:15:57.730 12:18:06 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:15:57.730 12:18:06 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:57.730 00:15:57.730 real 0m1.754s 00:15:57.730 user 0m1.478s 00:15:57.730 sys 0m0.207s 00:15:57.730 12:18:06 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:57.730 12:18:06 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:15:57.730 ************************************ 00:15:57.730 END TEST accel_compress_verify 00:15:57.730 ************************************ 00:15:57.730 12:18:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:15:57.730 12:18:06 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:15:57.730 12:18:06 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:15:57.730 12:18:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:57.730 12:18:06 accel -- common/autotest_common.sh@10 -- # set +x 00:15:57.730 ************************************ 00:15:57.730 START TEST accel_wrong_workload 00:15:57.730 ************************************ 00:15:57.730 12:18:07 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:15:57.730 12:18:07 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:15:57.730 12:18:07 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:15:57.730 12:18:07 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:15:57.730 12:18:07 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:57.730 12:18:07 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:15:57.730 12:18:07 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:57.730 12:18:07 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
00:15:57.730 12:18:07 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:15:57.730 12:18:07 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:15:57.730 12:18:07 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:15:57.730 12:18:07 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:15:57.730 12:18:07 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:15:57.730 12:18:07 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:15:57.730 12:18:07 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:15:57.730 12:18:07 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:15:57.730 12:18:07 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:15:57.730 Unsupported workload type: foobar 00:15:57.730 [2024-07-10 12:18:07.073976] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:15:57.730 accel_perf options: 00:15:57.730 [-h help message] 00:15:57.730 [-q queue depth per core] 00:15:57.730 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:15:57.730 [-T number of threads per core 00:15:57.730 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:15:57.730 [-t time in seconds] 00:15:57.730 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:15:57.730 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:15:57.730 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:15:57.730 [-l for compress/decompress workloads, name of uncompressed input file 00:15:57.730 [-S for crc32c workload, use this seed value (default 0) 00:15:57.730 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:15:57.730 [-f for fill workload, use this BYTE value (default 255) 00:15:57.730 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:15:57.730 [-y verify result if this switch is on] 00:15:57.730 [-a tasks to allocate per core (default: same value as -q)] 00:15:57.730 Can be used to spread operations across a wider range of memory. 
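That usage text is accel_perf's response to the unsupported "-w foobar" workload, and its flags line up with the invocations used elsewhere in this run. For reference, the crc32c and fill cases later in the log reduce to commands like the following (the JSON config normally passed on -c /dev/fd/62 is omitted here):

    # -w selects the workload, -t the duration in seconds, -y verifies results.
    # -S is the crc32c seed; -f the fill byte; -q queue depth per core; -a tasks per core.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y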
00:15:57.730 ************************************ 00:15:57.730 END TEST accel_wrong_workload 00:15:57.730 ************************************ 00:15:57.730 12:18:07 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:15:57.730 12:18:07 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:57.730 12:18:07 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:57.730 12:18:07 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:57.730 00:15:57.730 real 0m0.096s 00:15:57.730 user 0m0.081s 00:15:57.730 sys 0m0.052s 00:15:57.730 12:18:07 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:57.730 12:18:07 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:15:57.730 12:18:07 accel -- common/autotest_common.sh@1142 -- # return 0 00:15:57.730 12:18:07 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:15:57.730 12:18:07 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:15:57.730 12:18:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:57.730 12:18:07 accel -- common/autotest_common.sh@10 -- # set +x 00:15:57.730 ************************************ 00:15:57.730 START TEST accel_negative_buffers 00:15:57.730 ************************************ 00:15:57.730 12:18:07 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:15:57.730 12:18:07 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:15:57.730 12:18:07 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:15:57.730 12:18:07 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:15:57.730 12:18:07 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:57.730 12:18:07 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:15:57.730 12:18:07 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:57.730 12:18:07 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:15:57.730 12:18:07 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:15:57.730 12:18:07 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:15:57.730 12:18:07 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:15:57.730 12:18:07 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:15:57.730 12:18:07 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:15:57.730 12:18:07 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:15:57.730 12:18:07 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:15:57.730 12:18:07 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:15:57.730 12:18:07 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:15:57.989 -x option must be non-negative. 
00:15:57.989 [2024-07-10 12:18:07.234928] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:15:57.989 accel_perf options: 00:15:57.989 [-h help message] 00:15:57.989 [-q queue depth per core] 00:15:57.989 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:15:57.989 [-T number of threads per core 00:15:57.989 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:15:57.989 [-t time in seconds] 00:15:57.989 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:15:57.989 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:15:57.989 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:15:57.989 [-l for compress/decompress workloads, name of uncompressed input file 00:15:57.989 [-S for crc32c workload, use this seed value (default 0) 00:15:57.989 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:15:57.989 [-f for fill workload, use this BYTE value (default 255) 00:15:57.989 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:15:57.989 [-y verify result if this switch is on] 00:15:57.989 [-a tasks to allocate per core (default: same value as -q)] 00:15:57.989 Can be used to spread operations across a wider range of memory. 00:15:57.989 12:18:07 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:15:57.989 12:18:07 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:57.989 12:18:07 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:57.989 12:18:07 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:57.989 00:15:57.989 real 0m0.079s 00:15:57.989 user 0m0.073s 00:15:57.989 sys 0m0.048s 00:15:57.989 12:18:07 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:57.989 ************************************ 00:15:57.989 END TEST accel_negative_buffers 00:15:57.989 ************************************ 00:15:57.989 12:18:07 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:15:57.989 12:18:07 accel -- common/autotest_common.sh@1142 -- # return 0 00:15:57.989 12:18:07 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:15:57.989 12:18:07 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:15:57.989 12:18:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:57.989 12:18:07 accel -- common/autotest_common.sh@10 -- # set +x 00:15:57.989 ************************************ 00:15:57.989 START TEST accel_crc32c 00:15:57.989 ************************************ 00:15:57.989 12:18:07 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:15:57.989 12:18:07 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:15:57.989 12:18:07 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:15:57.989 12:18:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:57.989 12:18:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:57.989 12:18:07 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:15:57.989 12:18:07 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:15:57.989 12:18:07 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:15:57.989 12:18:07 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:15:57.989 12:18:07 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:15:57.989 12:18:07 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:15:57.989 12:18:07 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:15:57.989 12:18:07 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:15:57.989 12:18:07 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:15:57.989 12:18:07 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:15:57.989 [2024-07-10 12:18:07.385593] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:15:57.989 [2024-07-10 12:18:07.385721] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65325 ] 00:15:58.247 [2024-07-10 12:18:07.559858] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:58.507 [2024-07-10 12:18:07.802560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.766 12:18:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:15:58.766 12:18:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:58.766 12:18:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:58.766 12:18:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:58.766 12:18:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:15:58.766 12:18:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:58.766 12:18:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:58.766 12:18:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:58.766 12:18:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:15:58.766 12:18:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:58.766 12:18:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:58.766 12:18:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:58.766 12:18:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:15:58.766 12:18:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:58.766 12:18:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:58.766 12:18:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:58.766 12:18:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:15:58.766 12:18:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:58.766 12:18:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:58.766 12:18:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:58.766 12:18:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:15:58.766 12:18:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:58.766 12:18:08 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:15:58.766 12:18:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:58.766 12:18:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:58.766 12:18:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:15:58.766 12:18:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:58.766 12:18:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:58.766 12:18:08 accel.accel_crc32c -- accel/accel.sh@19 
-- # read -r var val 00:15:58.766 12:18:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:15:58.766 12:18:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:58.766 12:18:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:58.766 12:18:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:58.766 12:18:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:15:58.766 12:18:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:58.766 12:18:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:58.766 12:18:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:58.766 12:18:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:15:58.766 12:18:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:58.766 12:18:08 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:15:58.766 12:18:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:58.766 12:18:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:58.766 12:18:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:15:58.766 12:18:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:58.766 12:18:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:58.766 12:18:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:58.766 12:18:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:15:58.766 12:18:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:58.766 12:18:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:58.766 12:18:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:58.766 12:18:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:15:58.766 12:18:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:58.766 12:18:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:58.766 12:18:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:58.766 12:18:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:15:58.766 12:18:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:58.766 12:18:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:58.766 12:18:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:58.766 12:18:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:15:58.766 12:18:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:58.766 12:18:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:58.766 12:18:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:58.766 12:18:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:15:58.766 12:18:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:58.766 12:18:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:58.766 12:18:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:58.766 12:18:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:15:58.766 12:18:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:58.766 12:18:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:58.766 12:18:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:00.671 12:18:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:16:00.671 12:18:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:00.671 12:18:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:00.671 12:18:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var 
val 00:16:00.671 12:18:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:16:00.671 12:18:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:00.671 12:18:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:00.671 12:18:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:00.671 12:18:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:16:00.671 12:18:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:00.671 12:18:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:00.671 12:18:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:00.671 12:18:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:16:00.671 12:18:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:00.671 12:18:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:00.671 12:18:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:00.671 12:18:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:16:00.671 12:18:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:00.671 12:18:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:00.671 12:18:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:00.671 12:18:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:16:00.671 12:18:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:00.671 12:18:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:00.671 12:18:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:00.671 12:18:10 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:00.671 12:18:10 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:16:00.671 12:18:10 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:00.671 00:16:00.671 real 0m2.733s 00:16:00.671 user 0m2.434s 00:16:00.671 sys 0m0.206s 00:16:00.671 12:18:10 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:00.671 ************************************ 00:16:00.671 END TEST accel_crc32c 00:16:00.671 ************************************ 00:16:00.671 12:18:10 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:16:00.671 12:18:10 accel -- common/autotest_common.sh@1142 -- # return 0 00:16:00.671 12:18:10 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:16:00.671 12:18:10 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:16:00.671 12:18:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:00.671 12:18:10 accel -- common/autotest_common.sh@10 -- # set +x 00:16:00.671 ************************************ 00:16:00.671 START TEST accel_crc32c_C2 00:16:00.671 ************************************ 00:16:00.671 12:18:10 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:16:00.671 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:16:00.671 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:16:00.671 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:00.671 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:16:00.671 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:00.671 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:16:00.671 12:18:10 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:16:00.671 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:00.671 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:00.671 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:00.671 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:00.671 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:00.671 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:16:00.671 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:16:00.931 [2024-07-10 12:18:10.188618] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:16:00.931 [2024-07-10 12:18:10.188753] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65372 ] 00:16:00.931 [2024-07-10 12:18:10.359879] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:01.189 [2024-07-10 12:18:10.606219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:01.448 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:01.448 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:01.448 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:01.448 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:01.448 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:01.448 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:01.448 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:01.448 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:01.448 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:16:01.448 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:01.448 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:01.448 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:01.448 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:01.448 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:01.448 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:01.448 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:01.448 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:01.448 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:01.448 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:01.448 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:01.448 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:16:01.448 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:01.448 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:16:01.448 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:01.448 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:01.448 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:16:01.448 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:01.448 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:16:01.448 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:01.448 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:01.448 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:01.448 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:01.448 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:01.448 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:01.448 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:01.448 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:01.448 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:01.448 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:16:01.448 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:01.448 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:16:01.448 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:01.448 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:01.448 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:16:01.448 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:01.448 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:01.448 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:01.448 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:16:01.448 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:01.448 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:01.448 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:01.448 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:16:01.448 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:01.448 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:01.448 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:01.448 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:16:01.448 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:01.448 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:01.448 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:01.448 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:16:01.448 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:01.448 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:01.448 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:01.448 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:01.448 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:01.448 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:01.448 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:01.448 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:01.448 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:01.449 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:01.449 12:18:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:03.981 12:18:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:03.981 12:18:12 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:03.981 12:18:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:03.981 12:18:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:03.981 12:18:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:03.981 12:18:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:03.981 12:18:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:03.981 12:18:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:03.981 12:18:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:03.981 12:18:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:03.981 12:18:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:03.981 12:18:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:03.981 12:18:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:03.981 12:18:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:03.981 12:18:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:03.981 12:18:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:03.981 12:18:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:03.981 12:18:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:03.981 12:18:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:03.981 12:18:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:03.981 12:18:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:03.981 12:18:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:03.981 12:18:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:03.981 12:18:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:03.981 12:18:12 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:03.981 12:18:12 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:16:03.981 12:18:12 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:03.981 00:16:03.981 real 0m2.735s 00:16:03.981 user 0m2.462s 00:16:03.981 sys 0m0.181s 00:16:03.981 12:18:12 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:03.981 ************************************ 00:16:03.981 END TEST accel_crc32c_C2 00:16:03.981 ************************************ 00:16:03.981 12:18:12 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:16:03.981 12:18:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:16:03.981 12:18:12 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:16:03.981 12:18:12 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:16:03.981 12:18:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:03.981 12:18:12 accel -- common/autotest_common.sh@10 -- # set +x 00:16:03.981 ************************************ 00:16:03.981 START TEST accel_copy 00:16:03.981 ************************************ 00:16:03.981 12:18:12 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:16:03.981 12:18:12 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:16:03.981 12:18:12 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:16:03.981 12:18:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:03.981 12:18:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:03.981 12:18:12 accel.accel_copy -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:16:03.981 12:18:12 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:16:03.981 12:18:12 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:16:03.981 12:18:12 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:03.981 12:18:12 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:03.981 12:18:12 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:03.981 12:18:12 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:03.981 12:18:12 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:03.981 12:18:12 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:16:03.981 12:18:12 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:16:03.981 [2024-07-10 12:18:12.985984] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:16:03.981 [2024-07-10 12:18:12.986111] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65424 ] 00:16:03.981 [2024-07-10 12:18:13.155880] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:03.981 [2024-07-10 12:18:13.393532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:04.240 12:18:13 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:16:04.240 12:18:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:04.240 12:18:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:04.240 12:18:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:04.240 12:18:13 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:16:04.240 12:18:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:04.240 12:18:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:04.240 12:18:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:04.240 12:18:13 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:16:04.240 12:18:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:04.240 12:18:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:04.240 12:18:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:04.240 12:18:13 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:16:04.240 12:18:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:04.240 12:18:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:04.240 12:18:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:04.240 12:18:13 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:16:04.240 12:18:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:04.240 12:18:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:04.240 12:18:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:04.240 12:18:13 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:16:04.240 12:18:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:04.240 12:18:13 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:16:04.240 12:18:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:04.240 12:18:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:04.240 12:18:13 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:04.240 12:18:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:04.240 
12:18:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:04.240 12:18:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:04.240 12:18:13 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:16:04.240 12:18:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:04.240 12:18:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:04.240 12:18:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:04.240 12:18:13 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:16:04.240 12:18:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:04.240 12:18:13 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:16:04.240 12:18:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:04.240 12:18:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:04.240 12:18:13 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:16:04.240 12:18:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:04.240 12:18:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:04.240 12:18:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:04.240 12:18:13 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:16:04.240 12:18:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:04.240 12:18:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:04.240 12:18:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:04.240 12:18:13 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:16:04.240 12:18:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:04.240 12:18:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:04.240 12:18:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:04.240 12:18:13 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:16:04.240 12:18:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:04.240 12:18:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:04.240 12:18:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:04.240 12:18:13 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:16:04.240 12:18:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:04.240 12:18:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:04.240 12:18:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:04.240 12:18:13 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:16:04.240 12:18:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:04.240 12:18:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:04.240 12:18:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:04.240 12:18:13 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:16:04.240 12:18:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:04.240 12:18:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:04.240 12:18:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:06.184 12:18:15 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:16:06.184 12:18:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:06.184 12:18:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:06.184 12:18:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:06.184 12:18:15 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:16:06.184 12:18:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:06.184 12:18:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:06.184 12:18:15 accel.accel_copy -- accel/accel.sh@19 
-- # read -r var val 00:16:06.184 12:18:15 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:16:06.184 12:18:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:06.184 12:18:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:06.184 12:18:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:06.184 12:18:15 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:16:06.184 12:18:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:06.184 12:18:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:06.184 12:18:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:06.184 12:18:15 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:16:06.184 12:18:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:06.184 12:18:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:06.184 12:18:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:06.184 12:18:15 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:16:06.184 12:18:15 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:06.184 12:18:15 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:16:06.184 12:18:15 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:16:06.184 12:18:15 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:06.184 12:18:15 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:16:06.184 12:18:15 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:06.443 00:16:06.443 real 0m2.739s 00:16:06.443 user 0m2.473s 00:16:06.443 sys 0m0.173s 00:16:06.443 12:18:15 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:06.443 12:18:15 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:16:06.443 ************************************ 00:16:06.443 END TEST accel_copy 00:16:06.443 ************************************ 00:16:06.443 12:18:15 accel -- common/autotest_common.sh@1142 -- # return 0 00:16:06.443 12:18:15 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:16:06.443 12:18:15 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:16:06.443 12:18:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:06.443 12:18:15 accel -- common/autotest_common.sh@10 -- # set +x 00:16:06.443 ************************************ 00:16:06.443 START TEST accel_fill 00:16:06.443 ************************************ 00:16:06.443 12:18:15 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:16:06.443 12:18:15 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:16:06.443 12:18:15 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:16:06.443 12:18:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:06.443 12:18:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:06.443 12:18:15 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:16:06.443 12:18:15 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:16:06.443 12:18:15 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:16:06.443 12:18:15 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:06.443 12:18:15 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:06.443 12:18:15 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:06.443 12:18:15 accel.accel_fill -- 
accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:06.443 12:18:15 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:06.443 12:18:15 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:16:06.443 12:18:15 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:16:06.443 [2024-07-10 12:18:15.783328] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:16:06.443 [2024-07-10 12:18:15.784137] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65469 ] 00:16:06.702 [2024-07-10 12:18:15.955464] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:06.960 [2024-07-10 12:18:16.203855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:07.220 12:18:16 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:16:07.220 12:18:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:07.220 12:18:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:07.220 12:18:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:07.220 12:18:16 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:16:07.220 12:18:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:07.220 12:18:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:07.220 12:18:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:07.220 12:18:16 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:16:07.220 12:18:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:07.220 12:18:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:07.220 12:18:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:07.220 12:18:16 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:16:07.220 12:18:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:07.220 12:18:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:07.220 12:18:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:07.220 12:18:16 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:16:07.220 12:18:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:07.220 12:18:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:07.220 12:18:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:07.220 12:18:16 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:16:07.220 12:18:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:07.220 12:18:16 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:16:07.220 12:18:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:07.220 12:18:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:07.220 12:18:16 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:16:07.220 12:18:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:07.220 12:18:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:07.220 12:18:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:07.220 12:18:16 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:07.220 12:18:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:07.220 12:18:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:07.220 12:18:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:07.220 12:18:16 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:16:07.220 12:18:16 accel.accel_fill -- 
accel/accel.sh@21 -- # case "$var" in 00:16:07.220 12:18:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:07.220 12:18:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:07.220 12:18:16 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:16:07.220 12:18:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:07.220 12:18:16 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:16:07.220 12:18:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:07.220 12:18:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:07.220 12:18:16 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:16:07.220 12:18:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:07.220 12:18:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:07.220 12:18:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:07.220 12:18:16 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:16:07.220 12:18:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:07.220 12:18:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:07.220 12:18:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:07.220 12:18:16 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:16:07.220 12:18:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:07.220 12:18:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:07.220 12:18:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:07.220 12:18:16 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:16:07.220 12:18:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:07.220 12:18:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:07.220 12:18:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:07.220 12:18:16 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:16:07.220 12:18:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:07.220 12:18:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:07.220 12:18:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:07.220 12:18:16 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:16:07.220 12:18:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:07.220 12:18:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:07.220 12:18:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:07.220 12:18:16 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:16:07.220 12:18:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:07.220 12:18:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:07.220 12:18:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:09.144 12:18:18 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:16:09.144 12:18:18 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:09.144 12:18:18 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:09.144 12:18:18 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:09.144 12:18:18 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:16:09.144 12:18:18 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:09.144 12:18:18 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:09.144 12:18:18 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:09.144 12:18:18 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:16:09.144 12:18:18 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:09.144 12:18:18 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 
00:16:09.144 12:18:18 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:09.144 12:18:18 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:16:09.144 12:18:18 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:09.144 12:18:18 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:09.144 12:18:18 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:09.144 12:18:18 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:16:09.144 12:18:18 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:09.144 12:18:18 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:09.144 12:18:18 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:09.144 12:18:18 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:16:09.144 12:18:18 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:16:09.144 12:18:18 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:16:09.144 12:18:18 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:16:09.144 12:18:18 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:09.144 12:18:18 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:16:09.144 12:18:18 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:09.144 00:16:09.144 real 0m2.743s 00:16:09.144 user 0m2.454s 00:16:09.144 sys 0m0.196s 00:16:09.144 12:18:18 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:09.144 12:18:18 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:16:09.144 ************************************ 00:16:09.144 END TEST accel_fill 00:16:09.144 ************************************ 00:16:09.144 12:18:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:16:09.144 12:18:18 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:16:09.144 12:18:18 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:16:09.144 12:18:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:09.144 12:18:18 accel -- common/autotest_common.sh@10 -- # set +x 00:16:09.144 ************************************ 00:16:09.144 START TEST accel_copy_crc32c 00:16:09.144 ************************************ 00:16:09.144 12:18:18 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:16:09.144 12:18:18 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:16:09.144 12:18:18 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:16:09.144 12:18:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:09.144 12:18:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:09.144 12:18:18 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:16:09.144 12:18:18 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:16:09.144 12:18:18 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:16:09.144 12:18:18 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:09.144 12:18:18 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:09.144 12:18:18 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:09.144 12:18:18 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:09.144 12:18:18 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:09.144 12:18:18 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # 
local IFS=, 00:16:09.144 12:18:18 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:16:09.144 [2024-07-10 12:18:18.584007] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:16:09.144 [2024-07-10 12:18:18.584146] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65517 ] 00:16:09.403 [2024-07-10 12:18:18.754259] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:09.662 [2024-07-10 12:18:19.001427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.920 12:18:19 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:16:09.920 12:18:19 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:09.920 12:18:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:09.920 12:18:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:09.920 12:18:19 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:16:09.920 12:18:19 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:09.920 12:18:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:09.920 12:18:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:09.921 12:18:19 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:16:09.921 12:18:19 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:09.921 12:18:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:09.921 12:18:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:09.921 12:18:19 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:16:09.921 12:18:19 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:09.921 12:18:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:09.921 12:18:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:09.921 12:18:19 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:16:09.921 12:18:19 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:09.921 12:18:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:09.921 12:18:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:09.921 12:18:19 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:16:09.921 12:18:19 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:09.921 12:18:19 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:16:09.921 12:18:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:09.921 12:18:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:09.921 12:18:19 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:16:09.921 12:18:19 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:09.921 12:18:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:09.921 12:18:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:09.921 12:18:19 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:09.921 12:18:19 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:09.921 12:18:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:09.921 12:18:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:09.921 12:18:19 accel.accel_copy_crc32c -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:16:09.921 12:18:19 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:09.921 12:18:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:09.921 12:18:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:09.921 12:18:19 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:16:09.921 12:18:19 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:09.921 12:18:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:09.921 12:18:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:09.921 12:18:19 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:16:09.921 12:18:19 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:09.921 12:18:19 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:16:09.921 12:18:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:09.921 12:18:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:09.921 12:18:19 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:16:09.921 12:18:19 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:09.921 12:18:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:09.921 12:18:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:09.921 12:18:19 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:16:09.921 12:18:19 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:09.921 12:18:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:09.921 12:18:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:09.921 12:18:19 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:16:09.921 12:18:19 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:09.921 12:18:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:09.921 12:18:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:09.921 12:18:19 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:16:09.921 12:18:19 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:09.921 12:18:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:09.921 12:18:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:09.921 12:18:19 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:16:09.921 12:18:19 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:09.921 12:18:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:09.921 12:18:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:09.921 12:18:19 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:16:09.921 12:18:19 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:09.921 12:18:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:09.921 12:18:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:09.921 12:18:19 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:16:09.921 12:18:19 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:09.921 12:18:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:09.921 12:18:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:11.826 12:18:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:16:11.826 12:18:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:16:11.826 12:18:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:11.826 12:18:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:11.826 12:18:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:16:11.826 12:18:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:11.826 12:18:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:11.826 12:18:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:11.826 12:18:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:16:11.826 12:18:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:11.826 12:18:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:11.826 12:18:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:11.826 12:18:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:16:11.826 12:18:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:11.826 12:18:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:11.826 12:18:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:11.826 12:18:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:16:11.826 12:18:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:11.826 12:18:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:11.826 12:18:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:11.826 12:18:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:16:11.826 12:18:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:16:11.826 12:18:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:16:11.826 12:18:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:16:11.826 12:18:21 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:11.826 12:18:21 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:16:11.826 12:18:21 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:11.826 00:16:11.826 real 0m2.745s 00:16:11.826 user 0m2.463s 00:16:11.826 sys 0m0.190s 00:16:11.826 12:18:21 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:11.826 12:18:21 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:16:11.826 ************************************ 00:16:11.826 END TEST accel_copy_crc32c 00:16:11.826 ************************************ 00:16:12.086 12:18:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:16:12.086 12:18:21 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:16:12.086 12:18:21 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:16:12.086 12:18:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:12.086 12:18:21 accel -- common/autotest_common.sh@10 -- # set +x 00:16:12.086 ************************************ 00:16:12.086 START TEST accel_copy_crc32c_C2 00:16:12.086 ************************************ 00:16:12.086 12:18:21 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:16:12.086 12:18:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:16:12.086 12:18:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:16:12.086 12:18:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:12.086 12:18:21 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@19 -- # read -r var val 00:16:12.086 12:18:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:16:12.086 12:18:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:16:12.086 12:18:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:16:12.086 12:18:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:12.086 12:18:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:12.086 12:18:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:12.086 12:18:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:12.086 12:18:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:12.086 12:18:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:16:12.086 12:18:21 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:16:12.086 [2024-07-10 12:18:21.390879] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:16:12.086 [2024-07-10 12:18:21.391005] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65569 ] 00:16:12.086 [2024-07-10 12:18:21.561046] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:12.345 [2024-07-10 12:18:21.808817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:12.603 12:18:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:12.603 12:18:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:12.603 12:18:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:12.604 12:18:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:12.604 12:18:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:12.604 12:18:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:12.604 12:18:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:12.604 12:18:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:12.604 12:18:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:16:12.604 12:18:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:12.604 12:18:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:12.604 12:18:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:12.604 12:18:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:12.604 12:18:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:12.604 12:18:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:12.604 12:18:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:12.604 12:18:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:12.604 12:18:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:12.604 12:18:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:12.604 12:18:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:12.604 12:18:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:16:12.604 12:18:22 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:16:12.604 12:18:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:16:12.604 12:18:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:12.604 12:18:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:12.604 12:18:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:16:12.604 12:18:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:12.604 12:18:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:12.604 12:18:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:12.604 12:18:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:12.604 12:18:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:12.604 12:18:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:12.604 12:18:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:12.604 12:18:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:16:12.604 12:18:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:12.604 12:18:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:12.604 12:18:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:12.604 12:18:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:12.604 12:18:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:12.604 12:18:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:12.604 12:18:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:12.604 12:18:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:16:12.604 12:18:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:12.604 12:18:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:16:12.604 12:18:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:12.604 12:18:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:12.604 12:18:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:16:12.604 12:18:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:12.604 12:18:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:12.604 12:18:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:12.604 12:18:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:16:12.604 12:18:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:12.604 12:18:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:12.604 12:18:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:12.604 12:18:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:16:12.604 12:18:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:12.604 12:18:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:12.604 12:18:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:12.604 12:18:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:16:12.604 12:18:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:12.604 12:18:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:12.604 12:18:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:12.604 12:18:22 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:16:12.604 12:18:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:12.604 12:18:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:12.604 12:18:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:12.604 12:18:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:12.604 12:18:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:12.604 12:18:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:12.604 12:18:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:12.604 12:18:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:12.604 12:18:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:12.604 12:18:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:12.604 12:18:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:15.156 12:18:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:15.156 12:18:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:15.156 12:18:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:15.156 12:18:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:15.156 12:18:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:15.156 12:18:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:15.156 12:18:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:15.156 12:18:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:15.156 12:18:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:15.156 12:18:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:15.156 12:18:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:15.156 12:18:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:15.156 12:18:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:15.156 12:18:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:15.156 12:18:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:15.156 12:18:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:15.156 12:18:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:15.156 12:18:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:15.156 12:18:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:15.156 12:18:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:15.156 12:18:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:16:15.156 12:18:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:16:15.156 12:18:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:16:15.156 12:18:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:16:15.156 12:18:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:15.156 12:18:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:16:15.156 12:18:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:15.156 00:16:15.156 real 0m2.723s 00:16:15.156 user 0m2.445s 00:16:15.156 sys 0m0.186s 00:16:15.156 12:18:24 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 
00:16:15.156 12:18:24 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:16:15.156 ************************************ 00:16:15.156 END TEST accel_copy_crc32c_C2 00:16:15.156 ************************************ 00:16:15.156 12:18:24 accel -- common/autotest_common.sh@1142 -- # return 0 00:16:15.156 12:18:24 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:16:15.156 12:18:24 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:16:15.156 12:18:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:15.156 12:18:24 accel -- common/autotest_common.sh@10 -- # set +x 00:16:15.156 ************************************ 00:16:15.156 START TEST accel_dualcast 00:16:15.156 ************************************ 00:16:15.156 12:18:24 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:16:15.156 12:18:24 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:16:15.156 12:18:24 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:16:15.156 12:18:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:16:15.156 12:18:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:16:15.156 12:18:24 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:16:15.157 12:18:24 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:16:15.157 12:18:24 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:16:15.157 12:18:24 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:15.157 12:18:24 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:15.157 12:18:24 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:15.157 12:18:24 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:15.157 12:18:24 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:15.157 12:18:24 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:16:15.157 12:18:24 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:16:15.157 [2024-07-10 12:18:24.176125] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:16:15.157 [2024-07-10 12:18:24.176259] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65616 ] 00:16:15.157 [2024-07-10 12:18:24.347922] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:15.157 [2024-07-10 12:18:24.590371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:15.416 12:18:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:16:15.416 12:18:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:16:15.416 12:18:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:16:15.416 12:18:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:16:15.416 12:18:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:16:15.416 12:18:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:16:15.416 12:18:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:16:15.416 12:18:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:16:15.416 12:18:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:16:15.416 12:18:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:16:15.416 12:18:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:16:15.416 12:18:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:16:15.416 12:18:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:16:15.416 12:18:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:16:15.416 12:18:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:16:15.416 12:18:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:16:15.416 12:18:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:16:15.416 12:18:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:16:15.416 12:18:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:16:15.416 12:18:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:16:15.416 12:18:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:16:15.416 12:18:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:16:15.416 12:18:24 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:16:15.416 12:18:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:16:15.416 12:18:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:16:15.416 12:18:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:15.416 12:18:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:16:15.416 12:18:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:16:15.416 12:18:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:16:15.416 12:18:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:16:15.416 12:18:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:16:15.416 12:18:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:16:15.416 12:18:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:16:15.416 12:18:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:16:15.416 12:18:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:16:15.416 12:18:24 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:16:15.416 12:18:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:16:15.416 12:18:24 accel.accel_dualcast 
-- accel/accel.sh@19 -- # read -r var val 00:16:15.416 12:18:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:16:15.416 12:18:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:16:15.416 12:18:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:16:15.416 12:18:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:16:15.416 12:18:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:16:15.416 12:18:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:16:15.416 12:18:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:16:15.416 12:18:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:16:15.416 12:18:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:16:15.416 12:18:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:16:15.416 12:18:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:16:15.416 12:18:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:16:15.416 12:18:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:16:15.416 12:18:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:16:15.416 12:18:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:16:15.416 12:18:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:16:15.416 12:18:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:16:15.416 12:18:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:16:15.416 12:18:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:16:15.416 12:18:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:16:15.416 12:18:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:16:15.416 12:18:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:16:15.416 12:18:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:16:15.416 12:18:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:16:15.416 12:18:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:16:15.416 12:18:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:16:15.416 12:18:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:16:15.416 12:18:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:16:17.945 12:18:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:16:17.945 12:18:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:16:17.945 12:18:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:16:17.945 12:18:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:16:17.945 12:18:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:16:17.945 12:18:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:16:17.945 12:18:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:16:17.945 12:18:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:16:17.945 12:18:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:16:17.945 12:18:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:16:17.945 12:18:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:16:17.945 12:18:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:16:17.945 12:18:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:16:17.945 12:18:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:16:17.945 12:18:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:16:17.945 12:18:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var 
val 00:16:17.945 12:18:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:16:17.945 12:18:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:16:17.945 12:18:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:16:17.945 12:18:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:16:17.945 12:18:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:16:17.945 12:18:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:16:17.945 12:18:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:16:17.945 12:18:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:16:17.945 12:18:26 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:17.945 12:18:26 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:16:17.945 12:18:26 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:17.945 00:16:17.945 real 0m2.729s 00:16:17.945 user 0m2.441s 00:16:17.945 sys 0m0.198s 00:16:17.945 12:18:26 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:17.945 12:18:26 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:16:17.945 ************************************ 00:16:17.945 END TEST accel_dualcast 00:16:17.945 ************************************ 00:16:17.945 12:18:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:16:17.945 12:18:26 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:16:17.945 12:18:26 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:16:17.945 12:18:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:17.945 12:18:26 accel -- common/autotest_common.sh@10 -- # set +x 00:16:17.945 ************************************ 00:16:17.945 START TEST accel_compare 00:16:17.945 ************************************ 00:16:17.945 12:18:26 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:16:17.945 12:18:26 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:16:17.945 12:18:26 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:16:17.945 12:18:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:16:17.945 12:18:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:16:17.945 12:18:26 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:16:17.946 12:18:26 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:16:17.946 12:18:26 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:16:17.946 12:18:26 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:17.946 12:18:26 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:17.946 12:18:26 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:17.946 12:18:26 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:17.946 12:18:26 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:17.946 12:18:26 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:16:17.946 12:18:26 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:16:17.946 [2024-07-10 12:18:26.976446] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:16:17.946 [2024-07-10 12:18:26.976581] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65663 ] 00:16:17.946 [2024-07-10 12:18:27.147324] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:17.946 [2024-07-10 12:18:27.392136] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:18.204 12:18:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:16:18.204 12:18:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:16:18.204 12:18:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:16:18.204 12:18:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:16:18.204 12:18:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:16:18.204 12:18:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:16:18.204 12:18:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:16:18.204 12:18:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:16:18.204 12:18:27 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:16:18.204 12:18:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:16:18.204 12:18:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:16:18.204 12:18:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:16:18.204 12:18:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:16:18.204 12:18:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:16:18.204 12:18:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:16:18.204 12:18:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:16:18.204 12:18:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:16:18.204 12:18:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:16:18.204 12:18:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:16:18.204 12:18:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:16:18.204 12:18:27 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:16:18.204 12:18:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:16:18.204 12:18:27 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:16:18.204 12:18:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:16:18.204 12:18:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:16:18.204 12:18:27 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:18.204 12:18:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:16:18.204 12:18:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:16:18.204 12:18:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:16:18.204 12:18:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:16:18.204 12:18:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:16:18.204 12:18:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:16:18.204 12:18:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:16:18.204 12:18:27 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:16:18.204 12:18:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:16:18.204 12:18:27 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:16:18.204 12:18:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:16:18.204 12:18:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var 
val 00:16:18.204 12:18:27 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:16:18.204 12:18:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:16:18.204 12:18:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:16:18.204 12:18:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:16:18.204 12:18:27 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:16:18.204 12:18:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:16:18.204 12:18:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:16:18.204 12:18:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:16:18.204 12:18:27 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:16:18.204 12:18:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:16:18.204 12:18:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:16:18.204 12:18:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:16:18.204 12:18:27 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:16:18.204 12:18:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:16:18.204 12:18:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:16:18.204 12:18:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:16:18.204 12:18:27 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:16:18.204 12:18:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:16:18.204 12:18:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:16:18.204 12:18:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:16:18.204 12:18:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:16:18.204 12:18:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:16:18.204 12:18:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:16:18.204 12:18:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:16:18.204 12:18:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:16:18.204 12:18:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:16:18.204 12:18:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:16:18.204 12:18:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:16:20.733 12:18:29 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:16:20.733 12:18:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:16:20.733 12:18:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:16:20.733 12:18:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:16:20.733 12:18:29 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:16:20.733 12:18:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:16:20.733 12:18:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:16:20.733 12:18:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:16:20.733 12:18:29 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:16:20.733 12:18:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:16:20.733 12:18:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:16:20.733 12:18:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:16:20.733 12:18:29 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:16:20.733 12:18:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:16:20.733 12:18:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:16:20.733 12:18:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:16:20.733 12:18:29 accel.accel_compare -- accel/accel.sh@20 -- # val= 
00:16:20.733 12:18:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:16:20.733 12:18:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:16:20.733 12:18:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:16:20.733 12:18:29 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:16:20.733 12:18:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:16:20.733 12:18:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:16:20.733 12:18:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:16:20.733 12:18:29 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:20.733 12:18:29 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:16:20.733 12:18:29 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:20.733 00:16:20.733 real 0m2.745s 00:16:20.733 user 0m2.457s 00:16:20.733 sys 0m0.198s 00:16:20.733 12:18:29 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:20.733 12:18:29 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:16:20.733 ************************************ 00:16:20.733 END TEST accel_compare 00:16:20.733 ************************************ 00:16:20.733 12:18:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:16:20.733 12:18:29 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:16:20.733 12:18:29 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:16:20.733 12:18:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:20.733 12:18:29 accel -- common/autotest_common.sh@10 -- # set +x 00:16:20.733 ************************************ 00:16:20.733 START TEST accel_xor 00:16:20.733 ************************************ 00:16:20.733 12:18:29 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:16:20.733 12:18:29 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:16:20.733 12:18:29 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:16:20.733 12:18:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:20.733 12:18:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:20.733 12:18:29 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:16:20.733 12:18:29 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:16:20.733 12:18:29 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:16:20.733 12:18:29 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:20.733 12:18:29 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:20.733 12:18:29 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:20.733 12:18:29 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:20.733 12:18:29 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:20.733 12:18:29 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:16:20.733 12:18:29 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:16:20.733 [2024-07-10 12:18:29.791199] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:16:20.733 [2024-07-10 12:18:29.791329] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65714 ] 00:16:20.733 [2024-07-10 12:18:29.970784] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:20.992 [2024-07-10 12:18:30.215501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:20.992 12:18:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:16:20.992 12:18:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:20.992 12:18:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:20.992 12:18:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:20.992 12:18:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:16:20.992 12:18:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:20.992 12:18:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:20.992 12:18:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:20.992 12:18:30 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:16:20.992 12:18:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:20.992 12:18:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:20.992 12:18:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:20.992 12:18:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:16:20.992 12:18:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:20.992 12:18:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:20.992 12:18:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:20.992 12:18:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:16:20.992 12:18:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:20.992 12:18:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:20.992 12:18:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:20.992 12:18:30 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:16:20.992 12:18:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:20.992 12:18:30 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:16:20.992 12:18:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:20.992 12:18:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:20.992 12:18:30 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:16:20.992 12:18:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:20.992 12:18:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:20.992 12:18:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:20.992 12:18:30 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:20.992 12:18:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:20.992 12:18:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:20.992 12:18:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:20.992 12:18:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:16:20.992 12:18:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:20.992 12:18:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:20.992 12:18:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:20.992 12:18:30 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:16:20.992 12:18:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:20.992 12:18:30 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:16:20.992 12:18:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:20.992 12:18:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:20.992 12:18:30 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:16:20.992 12:18:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:20.992 12:18:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:21.251 12:18:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:21.251 12:18:30 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:16:21.251 12:18:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:21.251 12:18:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:21.251 12:18:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:21.251 12:18:30 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:16:21.251 12:18:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:21.251 12:18:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:21.251 12:18:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:21.251 12:18:30 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:16:21.251 12:18:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:21.251 12:18:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:21.251 12:18:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:21.251 12:18:30 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:16:21.251 12:18:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:21.251 12:18:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:21.251 12:18:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:21.251 12:18:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:16:21.251 12:18:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:21.251 12:18:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:21.251 12:18:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:21.251 12:18:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:16:21.251 12:18:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:21.251 12:18:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:21.251 12:18:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:23.154 12:18:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:16:23.154 12:18:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:23.154 12:18:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:23.154 12:18:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:23.154 12:18:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:16:23.154 12:18:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:23.154 12:18:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:23.154 12:18:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:23.154 12:18:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:16:23.154 12:18:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:23.154 12:18:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:23.154 12:18:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:23.154 12:18:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:16:23.154 12:18:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:23.154 12:18:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:23.154 12:18:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:23.154 12:18:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:16:23.154 12:18:32 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:16:23.154 12:18:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:23.154 12:18:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:23.154 12:18:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:16:23.154 12:18:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:23.154 12:18:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:23.154 12:18:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:23.154 12:18:32 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:23.154 12:18:32 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:16:23.154 12:18:32 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:23.154 00:16:23.154 real 0m2.751s 00:16:23.154 user 0m2.488s 00:16:23.154 sys 0m0.177s 00:16:23.154 12:18:32 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:23.154 12:18:32 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:16:23.154 ************************************ 00:16:23.154 END TEST accel_xor 00:16:23.154 ************************************ 00:16:23.154 12:18:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:16:23.154 12:18:32 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:16:23.154 12:18:32 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:16:23.154 12:18:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:23.154 12:18:32 accel -- common/autotest_common.sh@10 -- # set +x 00:16:23.154 ************************************ 00:16:23.154 START TEST accel_xor 00:16:23.154 ************************************ 00:16:23.154 12:18:32 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:16:23.154 12:18:32 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:16:23.154 12:18:32 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:16:23.154 12:18:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:23.154 12:18:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:23.154 12:18:32 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:16:23.154 12:18:32 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:16:23.154 12:18:32 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:16:23.154 12:18:32 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:23.154 12:18:32 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:23.154 12:18:32 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:23.154 12:18:32 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:23.154 12:18:32 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:23.154 12:18:32 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:16:23.154 12:18:32 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:16:23.154 [2024-07-10 12:18:32.604087] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:16:23.154 [2024-07-10 12:18:32.604207] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65762 ] 00:16:23.413 [2024-07-10 12:18:32.777822] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:23.684 [2024-07-10 12:18:33.025644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:23.944 12:18:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:16:23.944 12:18:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:23.944 12:18:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:23.944 12:18:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:23.944 12:18:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:16:23.944 12:18:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:23.944 12:18:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:23.944 12:18:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:23.944 12:18:33 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:16:23.944 12:18:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:23.944 12:18:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:23.944 12:18:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:23.944 12:18:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:16:23.944 12:18:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:23.944 12:18:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:23.944 12:18:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:23.944 12:18:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:16:23.944 12:18:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:23.944 12:18:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:23.944 12:18:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:23.944 12:18:33 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:16:23.944 12:18:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:23.944 12:18:33 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:16:23.944 12:18:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:23.944 12:18:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:23.944 12:18:33 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:16:23.944 12:18:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:23.944 12:18:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:23.944 12:18:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:23.944 12:18:33 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:23.944 12:18:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:23.944 12:18:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:23.944 12:18:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:23.944 12:18:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:16:23.944 12:18:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:23.944 12:18:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:23.944 12:18:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:23.944 12:18:33 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:16:23.944 12:18:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:23.944 12:18:33 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:16:23.944 12:18:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:23.944 12:18:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:23.944 12:18:33 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:16:23.944 12:18:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:23.944 12:18:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:23.944 12:18:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:23.944 12:18:33 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:16:23.944 12:18:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:23.944 12:18:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:23.944 12:18:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:23.944 12:18:33 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:16:23.944 12:18:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:23.944 12:18:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:23.944 12:18:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:23.944 12:18:33 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:16:23.944 12:18:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:23.944 12:18:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:23.944 12:18:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:23.944 12:18:33 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:16:23.944 12:18:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:23.944 12:18:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:23.944 12:18:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:23.944 12:18:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:16:23.944 12:18:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:23.944 12:18:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:23.944 12:18:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:23.944 12:18:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:16:23.944 12:18:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:23.944 12:18:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:23.944 12:18:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:25.843 12:18:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:16:25.843 12:18:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:25.843 12:18:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:25.843 12:18:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:25.843 12:18:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:16:25.843 12:18:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:25.843 12:18:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:25.843 12:18:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:25.843 12:18:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:16:25.843 12:18:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:25.843 12:18:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:25.843 12:18:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:25.843 12:18:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:16:25.843 12:18:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:25.843 12:18:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:25.843 12:18:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:25.843 12:18:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:16:25.843 12:18:35 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:16:25.843 12:18:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:25.843 12:18:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:25.843 12:18:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:16:25.843 12:18:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:16:25.843 12:18:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:16:25.843 12:18:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:16:25.843 12:18:35 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:25.843 12:18:35 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:16:25.843 12:18:35 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:25.843 00:16:25.843 real 0m2.751s 00:16:25.843 user 0m2.452s 00:16:25.843 sys 0m0.203s 00:16:25.843 ************************************ 00:16:25.843 END TEST accel_xor 00:16:25.843 ************************************ 00:16:25.843 12:18:35 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:25.843 12:18:35 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:16:26.101 12:18:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:16:26.101 12:18:35 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:16:26.101 12:18:35 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:16:26.101 12:18:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:26.101 12:18:35 accel -- common/autotest_common.sh@10 -- # set +x 00:16:26.101 ************************************ 00:16:26.101 START TEST accel_dif_verify 00:16:26.101 ************************************ 00:16:26.101 12:18:35 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:16:26.101 12:18:35 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:16:26.101 12:18:35 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:16:26.101 12:18:35 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:16:26.101 12:18:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:16:26.101 12:18:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:16:26.101 12:18:35 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:16:26.101 12:18:35 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:16:26.101 12:18:35 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:26.101 12:18:35 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:26.101 12:18:35 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:26.101 12:18:35 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:26.101 12:18:35 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:26.101 12:18:35 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:16:26.101 12:18:35 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:16:26.101 [2024-07-10 12:18:35.426742] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:16:26.101 [2024-07-10 12:18:35.427064] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65814 ] 00:16:26.436 [2024-07-10 12:18:35.597632] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:26.436 [2024-07-10 12:18:35.839650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:16:26.722 12:18:36 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:16:26.722 12:18:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:16:28.624 12:18:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:16:28.624 12:18:38 
accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:16:28.624 12:18:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:16:28.624 12:18:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:16:28.624 12:18:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:16:28.624 12:18:38 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:16:28.624 12:18:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:16:28.624 12:18:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:16:28.624 12:18:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:16:28.624 12:18:38 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:16:28.624 12:18:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:16:28.624 12:18:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:16:28.624 12:18:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:16:28.624 12:18:38 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:16:28.624 12:18:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:16:28.624 12:18:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:16:28.624 12:18:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:16:28.624 12:18:38 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:16:28.624 12:18:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:16:28.624 12:18:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:16:28.624 12:18:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:16:28.624 12:18:38 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:16:28.624 12:18:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:16:28.624 12:18:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:16:28.883 12:18:38 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:28.883 12:18:38 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:16:28.883 12:18:38 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:28.883 00:16:28.883 real 0m2.736s 00:16:28.883 user 0m2.442s 00:16:28.883 sys 0m0.203s 00:16:28.883 12:18:38 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:28.883 12:18:38 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:16:28.883 ************************************ 00:16:28.883 END TEST accel_dif_verify 00:16:28.883 ************************************ 00:16:28.883 12:18:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:16:28.883 12:18:38 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:16:28.883 12:18:38 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:16:28.883 12:18:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:28.883 12:18:38 accel -- common/autotest_common.sh@10 -- # set +x 00:16:28.883 ************************************ 00:16:28.883 START TEST accel_dif_generate 00:16:28.883 ************************************ 00:16:28.883 12:18:38 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:16:28.883 12:18:38 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:16:28.883 12:18:38 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:16:28.883 12:18:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:16:28.884 12:18:38 
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:16:28.884 12:18:38 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:16:28.884 12:18:38 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:16:28.884 12:18:38 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:16:28.884 12:18:38 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:28.884 12:18:38 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:28.884 12:18:38 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:28.884 12:18:38 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:28.884 12:18:38 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:28.884 12:18:38 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:16:28.884 12:18:38 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:16:28.884 [2024-07-10 12:18:38.251344] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:16:28.884 [2024-07-10 12:18:38.251525] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65855 ] 00:16:29.143 [2024-07-10 12:18:38.444085] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:29.401 [2024-07-10 12:18:38.687969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:29.660 12:18:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:16:29.660 12:18:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:16:29.660 12:18:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:16:29.660 12:18:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:16:29.660 12:18:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:16:29.660 12:18:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:16:29.660 12:18:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:16:29.660 12:18:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:16:29.660 12:18:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:16:29.660 12:18:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:16:29.660 12:18:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:16:29.660 12:18:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:16:29.660 12:18:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:16:29.660 12:18:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:16:29.660 12:18:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:16:29.660 12:18:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:16:29.660 12:18:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:16:29.660 12:18:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:16:29.660 12:18:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:16:29.660 12:18:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:16:29.660 12:18:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:16:29.660 12:18:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:16:29.660 12:18:38 
accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:16:29.660 12:18:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:16:29.660 12:18:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:16:29.660 12:18:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:29.660 12:18:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:16:29.660 12:18:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:16:29.660 12:18:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:16:29.660 12:18:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:29.660 12:18:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:16:29.660 12:18:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:16:29.660 12:18:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:16:29.660 12:18:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:16:29.660 12:18:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:16:29.660 12:18:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:16:29.660 12:18:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:16:29.660 12:18:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:16:29.660 12:18:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:16:29.660 12:18:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:16:29.660 12:18:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:16:29.660 12:18:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:16:29.660 12:18:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:16:29.660 12:18:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:16:29.660 12:18:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:16:29.660 12:18:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:16:29.660 12:18:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:16:29.660 12:18:38 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:16:29.660 12:18:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:16:29.660 12:18:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:16:29.660 12:18:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:16:29.660 12:18:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:16:29.660 12:18:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:16:29.660 12:18:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:16:29.660 12:18:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:16:29.660 12:18:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:16:29.660 12:18:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:16:29.660 12:18:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:16:29.660 12:18:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:16:29.660 12:18:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:16:29.660 12:18:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:16:29.660 12:18:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:16:29.660 12:18:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:16:29.660 12:18:38 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:16:29.660 12:18:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:16:29.660 12:18:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:16:29.660 12:18:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:16:29.660 12:18:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:16:29.660 12:18:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:16:29.660 12:18:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:16:29.660 12:18:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:16:29.660 12:18:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:16:29.660 12:18:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:16:29.660 12:18:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:16:29.660 12:18:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:16:29.660 12:18:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:16:29.660 12:18:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:16:29.660 12:18:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:16:31.564 12:18:40 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:16:31.564 12:18:40 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:16:31.564 12:18:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:16:31.564 12:18:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:16:31.564 12:18:40 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:16:31.564 12:18:40 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:16:31.564 12:18:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:16:31.564 12:18:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:16:31.564 12:18:40 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:16:31.564 12:18:40 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:16:31.565 12:18:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:16:31.565 12:18:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:16:31.565 12:18:40 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:16:31.565 12:18:40 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:16:31.565 12:18:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:16:31.565 12:18:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:16:31.565 12:18:40 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:16:31.565 12:18:40 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:16:31.565 12:18:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:16:31.565 12:18:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:16:31.565 12:18:40 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:16:31.565 12:18:40 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:16:31.565 12:18:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:16:31.565 12:18:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:16:31.565 12:18:40 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:31.565 12:18:40 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:16:31.565 ************************************ 00:16:31.565 END TEST accel_dif_generate 00:16:31.565 ************************************ 00:16:31.565 
12:18:40 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:31.565 00:16:31.565 real 0m2.779s 00:16:31.565 user 0m2.489s 00:16:31.565 sys 0m0.205s 00:16:31.565 12:18:40 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:31.565 12:18:40 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:16:31.565 12:18:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:16:31.565 12:18:41 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:16:31.565 12:18:41 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:16:31.565 12:18:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:31.565 12:18:41 accel -- common/autotest_common.sh@10 -- # set +x 00:16:31.565 ************************************ 00:16:31.565 START TEST accel_dif_generate_copy 00:16:31.565 ************************************ 00:16:31.565 12:18:41 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:16:31.565 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:16:31.565 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:16:31.565 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:16:31.565 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:16:31.565 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:16:31.565 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:16:31.565 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:16:31.565 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:31.565 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:31.565 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:31.565 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:31.565 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:31.565 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:16:31.565 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:16:31.824 [2024-07-10 12:18:41.083491] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:16:31.824 [2024-07-10 12:18:41.083621] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65907 ] 00:16:31.824 [2024-07-10 12:18:41.238988] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:32.083 [2024-07-10 12:18:41.525241] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:32.343 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:16:32.343 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:32.343 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:16:32.343 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:16:32.343 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:16:32.343 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:32.343 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:16:32.343 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:16:32.343 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:16:32.343 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:32.343 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:16:32.343 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:16:32.343 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:16:32.343 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:32.343 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:16:32.343 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:16:32.343 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:16:32.343 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:32.343 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:16:32.343 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:16:32.343 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:16:32.343 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:32.343 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:16:32.343 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:16:32.343 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:16:32.343 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:32.343 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:32.343 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:16:32.343 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:16:32.343 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:32.343 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:32.343 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:16:32.343 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:16:32.343 12:18:41 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:16:32.343 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:32.343 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:16:32.343 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:16:32.343 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:16:32.343 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:32.343 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:16:32.343 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:16:32.343 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:16:32.343 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:16:32.343 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:32.343 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:16:32.343 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:16:32.343 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:16:32.343 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:32.343 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:16:32.343 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:16:32.343 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:16:32.343 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:32.343 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:16:32.343 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:16:32.343 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:16:32.343 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:32.343 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:16:32.343 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:16:32.343 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:16:32.343 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:32.343 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:16:32.343 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:16:32.343 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:16:32.343 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:32.343 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:16:32.343 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:16:32.343 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:16:32.343 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:32.343 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:16:32.343 12:18:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:16:34.878 12:18:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:16:34.878 12:18:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:34.878 12:18:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:16:34.878 12:18:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:16:34.878 12:18:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:16:34.878 12:18:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:34.878 12:18:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:16:34.878 12:18:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:16:34.878 12:18:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:16:34.878 12:18:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:34.878 12:18:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:16:34.878 12:18:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:16:34.878 12:18:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:16:34.878 12:18:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:34.878 12:18:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:16:34.878 12:18:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:16:34.878 12:18:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:16:34.878 12:18:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:34.878 12:18:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:16:34.878 12:18:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:16:34.878 12:18:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:16:34.878 12:18:43 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:16:34.878 12:18:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:16:34.878 12:18:43 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:16:34.878 12:18:43 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:34.878 12:18:43 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:16:34.878 12:18:43 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:34.878 00:16:34.878 real 0m2.776s 00:16:34.878 user 0m2.477s 00:16:34.878 sys 0m0.208s 00:16:34.878 ************************************ 00:16:34.878 END TEST accel_dif_generate_copy 00:16:34.878 ************************************ 00:16:34.878 12:18:43 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:34.878 12:18:43 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:16:34.878 12:18:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:16:34.878 12:18:43 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:16:34.878 12:18:43 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:16:34.878 12:18:43 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:16:34.878 12:18:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:34.878 12:18:43 accel -- common/autotest_common.sh@10 -- # set +x 00:16:34.878 ************************************ 00:16:34.878 START TEST accel_comp 00:16:34.878 ************************************ 00:16:34.878 12:18:43 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:16:34.878 12:18:43 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:16:34.878 12:18:43 
accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:16:34.878 12:18:43 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:16:34.878 12:18:43 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:16:34.878 12:18:43 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:16:34.878 12:18:43 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:16:34.878 12:18:43 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:16:34.878 12:18:43 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:34.878 12:18:43 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:34.878 12:18:43 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:34.878 12:18:43 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:34.878 12:18:43 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:34.878 12:18:43 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:16:34.878 12:18:43 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:16:34.878 [2024-07-10 12:18:43.924388] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:16:34.878 [2024-07-10 12:18:43.924648] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65959 ] 00:16:34.878 [2024-07-10 12:18:44.094425] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:34.878 [2024-07-10 12:18:44.341837] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:35.138 12:18:44 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:16:35.138 12:18:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:16:35.138 12:18:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:16:35.138 12:18:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:16:35.138 12:18:44 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:16:35.138 12:18:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:16:35.138 12:18:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:16:35.138 12:18:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:16:35.138 12:18:44 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:16:35.138 12:18:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:16:35.138 12:18:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:16:35.138 12:18:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:16:35.138 12:18:44 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:16:35.138 12:18:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:16:35.138 12:18:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:16:35.138 12:18:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:16:35.138 12:18:44 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:16:35.138 12:18:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:16:35.138 12:18:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:16:35.138 12:18:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:16:35.138 12:18:44 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:16:35.138 12:18:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:16:35.138 12:18:44 accel.accel_comp -- accel/accel.sh@19 -- # 
IFS=: 00:16:35.138 12:18:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:16:35.138 12:18:44 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:16:35.138 12:18:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:16:35.138 12:18:44 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:16:35.138 12:18:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:16:35.138 12:18:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:16:35.138 12:18:44 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:35.138 12:18:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:16:35.138 12:18:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:16:35.138 12:18:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:16:35.138 12:18:44 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:16:35.138 12:18:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:16:35.138 12:18:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:16:35.138 12:18:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:16:35.138 12:18:44 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:16:35.138 12:18:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:16:35.138 12:18:44 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:16:35.138 12:18:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:16:35.138 12:18:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:16:35.138 12:18:44 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:16:35.138 12:18:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:16:35.138 12:18:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:16:35.138 12:18:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:16:35.138 12:18:44 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:16:35.138 12:18:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:16:35.138 12:18:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:16:35.138 12:18:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:16:35.138 12:18:44 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:16:35.138 12:18:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:16:35.138 12:18:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:16:35.138 12:18:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:16:35.138 12:18:44 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:16:35.138 12:18:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:16:35.138 12:18:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:16:35.138 12:18:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:16:35.138 12:18:44 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:16:35.138 12:18:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:16:35.138 12:18:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:16:35.138 12:18:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:16:35.138 12:18:44 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:16:35.138 12:18:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:16:35.138 12:18:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:16:35.138 12:18:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:16:35.138 12:18:44 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:16:35.138 12:18:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:16:35.138 12:18:44 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:16:35.138 12:18:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:16:35.138 12:18:44 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:16:35.138 12:18:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:16:35.138 12:18:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:16:35.138 12:18:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:16:37.675 12:18:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:16:37.675 12:18:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:16:37.675 12:18:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:16:37.675 12:18:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:16:37.675 12:18:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:16:37.675 12:18:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:16:37.675 12:18:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:16:37.675 12:18:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:16:37.675 12:18:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:16:37.675 12:18:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:16:37.675 12:18:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:16:37.675 12:18:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:16:37.675 12:18:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:16:37.675 12:18:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:16:37.675 12:18:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:16:37.675 12:18:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:16:37.675 12:18:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:16:37.675 12:18:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:16:37.675 12:18:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:16:37.675 12:18:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:16:37.675 12:18:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:16:37.675 12:18:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:16:37.675 12:18:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:16:37.675 12:18:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:16:37.675 12:18:46 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:37.675 12:18:46 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:16:37.675 12:18:46 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:37.675 00:16:37.675 real 0m2.752s 00:16:37.675 user 0m2.454s 00:16:37.675 sys 0m0.205s 00:16:37.675 12:18:46 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:37.675 12:18:46 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:16:37.675 ************************************ 00:16:37.675 END TEST accel_comp 00:16:37.675 ************************************ 00:16:37.675 12:18:46 accel -- common/autotest_common.sh@1142 -- # return 0 00:16:37.675 12:18:46 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:16:37.675 12:18:46 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:16:37.675 12:18:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:37.675 12:18:46 accel -- common/autotest_common.sh@10 -- # set +x 00:16:37.675 ************************************ 00:16:37.675 START TEST accel_decomp 00:16:37.675 ************************************ 00:16:37.675 12:18:46 
accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:16:37.675 12:18:46 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:16:37.675 12:18:46 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:16:37.675 12:18:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:16:37.675 12:18:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:16:37.675 12:18:46 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:16:37.675 12:18:46 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:16:37.675 12:18:46 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:16:37.675 12:18:46 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:37.675 12:18:46 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:37.675 12:18:46 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:37.675 12:18:46 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:37.675 12:18:46 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:37.675 12:18:46 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:16:37.675 12:18:46 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:16:37.675 [2024-07-10 12:18:46.753569] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:16:37.675 [2024-07-10 12:18:46.753699] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66006 ] 00:16:37.675 [2024-07-10 12:18:46.923763] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:37.934 [2024-07-10 12:18:47.170549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:37.934 12:18:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:16:37.935 12:18:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:16:37.935 12:18:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:16:37.935 12:18:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:16:37.935 12:18:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:16:37.935 12:18:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:16:37.935 12:18:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:16:37.935 12:18:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:16:37.935 12:18:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:16:37.935 12:18:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:16:37.935 12:18:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:16:37.935 12:18:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:16:37.935 12:18:47 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:16:37.935 12:18:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:16:37.935 12:18:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:16:37.935 12:18:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:16:37.935 12:18:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:16:37.935 12:18:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:16:37.935 12:18:47 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:16:37.935 12:18:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:16:37.935 12:18:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:16:37.935 12:18:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:16:37.935 12:18:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:16:37.935 12:18:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:16:37.935 12:18:47 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:16:37.935 12:18:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:16:37.935 12:18:47 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:16:37.935 12:18:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:16:37.935 12:18:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:16:37.935 12:18:47 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:38.194 12:18:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:16:38.194 12:18:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:16:38.194 12:18:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:16:38.194 12:18:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:16:38.194 12:18:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:16:38.194 12:18:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:16:38.194 12:18:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:16:38.194 12:18:47 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:16:38.194 12:18:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:16:38.194 12:18:47 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:16:38.194 12:18:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:16:38.194 12:18:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:16:38.194 12:18:47 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:16:38.194 12:18:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:16:38.194 12:18:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:16:38.194 12:18:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:16:38.194 12:18:47 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:16:38.194 12:18:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:16:38.194 12:18:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:16:38.194 12:18:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:16:38.194 12:18:47 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:16:38.194 12:18:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:16:38.194 12:18:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:16:38.194 12:18:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:16:38.194 12:18:47 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:16:38.194 12:18:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:16:38.194 12:18:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:16:38.194 12:18:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:16:38.194 12:18:47 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:16:38.194 12:18:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:16:38.194 12:18:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:16:38.194 12:18:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:16:38.194 12:18:47 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 
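
The blocks of "IFS=:", "read -r var val" and "case \"$var\"" lines that dominate this part of the log are bash xtrace output from a small parsing loop in test/accel/accel.sh: while accel_perf runs, the script reads the configuration summary the app prints (core mask, workload, transfer size, module, input file, queue depths, run time, verify) one "key: value" line at a time and records the opcode and module, so that the "[[ -n software ]]" / "[[ -n decompress ]]" checks at the end of each test have something to compare against. The sketch below only shows the shape of that loop; the key names, the trimming, and the printf stand-in for accel_perf's output are illustrative assumptions, not lines copied from accel.sh.

# Minimal sketch of the parse loop behind the repeated xtrace lines above.
while IFS=: read -r var val; do            # split each "key: value" line on ':'
    case "$var" in
        *Workload*) accel_opc=${val// /} ;;     # e.g. compress / decompress
        *Module*)   accel_module=${val// /} ;;  # e.g. software
        *)          : ;;                        # all other fields are ignored here
    esac
done < <(printf '%s\n' 'Workload Type: decompress' 'Module: software')  # stand-in for accel_perf stdout
echo "module=$accel_module opc=$accel_opc"   # -> module=software opc=decompress
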
00:16:38.194 12:18:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:16:38.194 12:18:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:16:38.194 12:18:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:16:38.194 12:18:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:16:38.194 12:18:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:16:38.194 12:18:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:16:38.194 12:18:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:16:38.194 12:18:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:16:38.194 12:18:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:16:38.194 12:18:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:16:38.194 12:18:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:16:40.098 12:18:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:16:40.098 12:18:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:16:40.098 12:18:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:16:40.098 12:18:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:16:40.098 12:18:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:16:40.098 12:18:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:16:40.098 12:18:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:16:40.098 12:18:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:16:40.098 12:18:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:16:40.098 12:18:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:16:40.098 12:18:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:16:40.098 12:18:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:16:40.098 12:18:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:16:40.098 12:18:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:16:40.098 12:18:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:16:40.098 12:18:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:16:40.098 12:18:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:16:40.098 12:18:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:16:40.098 12:18:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:16:40.098 12:18:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:16:40.098 12:18:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:16:40.098 12:18:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:16:40.098 12:18:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:16:40.098 12:18:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:16:40.098 12:18:49 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:40.098 12:18:49 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:16:40.098 12:18:49 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:40.098 00:16:40.098 real 0m2.742s 00:16:40.098 user 0m2.448s 00:16:40.098 sys 0m0.210s 00:16:40.098 ************************************ 00:16:40.098 END TEST accel_decomp 00:16:40.098 ************************************ 00:16:40.098 12:18:49 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:40.098 12:18:49 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:16:40.098 12:18:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:16:40.098 12:18:49 accel -- accel/accel.sh@118 -- # run_test 
accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:16:40.098 12:18:49 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:16:40.098 12:18:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:40.098 12:18:49 accel -- common/autotest_common.sh@10 -- # set +x 00:16:40.098 ************************************ 00:16:40.098 START TEST accel_decomp_full 00:16:40.098 ************************************ 00:16:40.098 12:18:49 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:16:40.098 12:18:49 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:16:40.098 12:18:49 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:16:40.098 12:18:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:16:40.098 12:18:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:16:40.098 12:18:49 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:16:40.098 12:18:49 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:16:40.098 12:18:49 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:16:40.098 12:18:49 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:40.098 12:18:49 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:40.098 12:18:49 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:40.098 12:18:49 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:40.099 12:18:49 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:40.099 12:18:49 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:16:40.099 12:18:49 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:16:40.099 [2024-07-10 12:18:49.565726] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
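
accel_decomp_full, which has just started above, is the same decompress test with one extra flag, -o 0. Judging from the trace that follows, -o sets the transfer size and 0 makes accel_perf fall back to the size of the input file: the earlier runs report '4096 bytes' where this one reports '111250 bytes', presumably the size of test/accel/bib. A hand-run equivalent of the command the harness builds is sketched below; the flag explanations are inferred from this log rather than taken from accel_perf's usage text, and the JSON config that accel.sh feeds in through -c /dev/fd/62 is left out on the assumption that the software module needs no extra configuration for a manual run.

# From the SPDK tree used in this job; paths mirror the ones in the log.
# -t 1: run for one second      -w decompress: workload
# -l:   compressed input file   -y: verify the decompressed output
# -o 0: transfer size, which here appears to mean "use the whole file"
cd /home/vagrant/spdk_repo/spdk
./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -o 0
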
00:16:40.099 [2024-07-10 12:18:49.565859] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66052 ] 00:16:40.357 [2024-07-10 12:18:49.736141] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:40.615 [2024-07-10 12:18:49.984196] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:40.874 12:18:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:16:40.874 12:18:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:16:40.874 12:18:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:16:40.874 12:18:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:16:40.874 12:18:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:16:40.874 12:18:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:16:40.874 12:18:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:16:40.874 12:18:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:16:40.874 12:18:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:16:40.874 12:18:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:16:40.874 12:18:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:16:40.874 12:18:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:16:40.874 12:18:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:16:40.874 12:18:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:16:40.874 12:18:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:16:40.874 12:18:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:16:40.874 12:18:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:16:40.874 12:18:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:16:40.874 12:18:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:16:40.874 12:18:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:16:40.874 12:18:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:16:40.874 12:18:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:16:40.874 12:18:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:16:40.874 12:18:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:16:40.874 12:18:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:16:40.874 12:18:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:16:40.874 12:18:50 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:16:40.874 12:18:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:16:40.874 12:18:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:16:40.874 12:18:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:16:40.874 12:18:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:16:40.874 12:18:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:16:40.874 12:18:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:16:40.874 12:18:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:16:40.874 12:18:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:16:40.874 12:18:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:16:40.874 12:18:50 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:16:40.874 12:18:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:16:40.874 12:18:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:16:40.874 12:18:50 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:16:40.874 12:18:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:16:40.874 12:18:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:16:40.874 12:18:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:16:40.874 12:18:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:16:40.874 12:18:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:16:40.874 12:18:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:16:40.874 12:18:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:16:40.874 12:18:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:16:40.874 12:18:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:16:40.874 12:18:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:16:40.874 12:18:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:16:40.874 12:18:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:16:40.874 12:18:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:16:40.874 12:18:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:16:40.874 12:18:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:16:40.874 12:18:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:16:40.874 12:18:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:16:40.874 12:18:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:16:40.874 12:18:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:16:40.874 12:18:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:16:40.874 12:18:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:16:40.874 12:18:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:16:40.874 12:18:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:16:40.874 12:18:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:16:40.874 12:18:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:16:40.874 12:18:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:16:40.874 12:18:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:16:40.874 12:18:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:16:40.874 12:18:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:16:40.874 12:18:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:16:40.874 12:18:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:16:40.874 12:18:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:16:40.874 12:18:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:16:40.874 12:18:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:16:42.776 12:18:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:16:42.776 12:18:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:16:42.776 12:18:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:16:42.776 12:18:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:16:42.776 12:18:52 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:16:42.776 12:18:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:16:42.776 12:18:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:16:42.776 12:18:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:16:42.776 12:18:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:16:42.776 12:18:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:16:42.776 12:18:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:16:42.776 12:18:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:16:42.776 12:18:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:16:42.776 12:18:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:16:42.776 12:18:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:16:42.776 12:18:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:16:42.776 12:18:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:16:42.776 12:18:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:16:42.776 12:18:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:16:42.776 12:18:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:16:42.776 12:18:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:16:43.033 12:18:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:16:43.033 12:18:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:16:43.033 12:18:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:16:43.033 12:18:52 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:43.033 12:18:52 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:16:43.033 12:18:52 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:43.033 00:16:43.033 real 0m2.770s 00:16:43.033 user 0m2.479s 00:16:43.033 sys 0m0.196s 00:16:43.033 12:18:52 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:43.033 ************************************ 00:16:43.033 END TEST accel_decomp_full 00:16:43.033 ************************************ 00:16:43.033 12:18:52 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:16:43.033 12:18:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:16:43.033 12:18:52 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:16:43.033 12:18:52 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:16:43.033 12:18:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:43.033 12:18:52 accel -- common/autotest_common.sh@10 -- # set +x 00:16:43.033 ************************************ 00:16:43.033 START TEST accel_decomp_mcore 00:16:43.033 ************************************ 00:16:43.033 12:18:52 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:16:43.033 12:18:52 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:16:43.033 12:18:52 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:16:43.034 12:18:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:43.034 12:18:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:43.034 12:18:52 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:16:43.034 12:18:52 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:16:43.034 12:18:52 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:43.034 12:18:52 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:16:43.034 12:18:52 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:43.034 12:18:52 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:43.034 12:18:52 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:43.034 12:18:52 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:43.034 12:18:52 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:16:43.034 12:18:52 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:16:43.034 [2024-07-10 12:18:52.401352] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:16:43.034 [2024-07-10 12:18:52.401526] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66099 ] 00:16:43.291 [2024-07-10 12:18:52.573465] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:43.549 [2024-07-10 12:18:52.818665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:43.549 [2024-07-10 12:18:52.818866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:43.549 [2024-07-10 12:18:52.819033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:43.549 [2024-07-10 12:18:52.819074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:43.806 12:18:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:16:43.806 12:18:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:43.806 12:18:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:43.806 12:18:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:43.806 12:18:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:16:43.807 12:18:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:43.807 12:18:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:43.807 12:18:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:43.807 12:18:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:16:43.807 12:18:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:43.807 12:18:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:43.807 12:18:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:43.807 12:18:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:16:43.807 12:18:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:43.807 12:18:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:43.807 12:18:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:43.807 12:18:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:16:43.807 12:18:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:43.807 12:18:53 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:16:43.807 12:18:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:43.807 12:18:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:16:43.807 12:18:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:43.807 12:18:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:43.807 12:18:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:43.807 12:18:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:16:43.807 12:18:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:43.807 12:18:53 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:16:43.807 12:18:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:43.807 12:18:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:43.807 12:18:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:43.807 12:18:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:43.807 12:18:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:43.807 12:18:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:43.807 12:18:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:16:43.807 12:18:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:43.807 12:18:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:43.807 12:18:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:43.807 12:18:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:16:43.807 12:18:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:43.807 12:18:53 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:16:43.807 12:18:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:43.807 12:18:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:43.807 12:18:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:16:43.807 12:18:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:43.807 12:18:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:43.807 12:18:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:43.807 12:18:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:16:43.807 12:18:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:43.807 12:18:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:43.807 12:18:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:43.807 12:18:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:16:43.807 12:18:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:43.807 12:18:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:43.807 12:18:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:43.807 12:18:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:16:43.807 12:18:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:43.807 12:18:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:43.807 12:18:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:43.807 12:18:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:16:43.807 12:18:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # 
case "$var" in 00:16:43.807 12:18:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:43.807 12:18:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:43.807 12:18:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:16:43.807 12:18:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:43.807 12:18:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:43.807 12:18:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:43.807 12:18:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:16:43.807 12:18:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:43.807 12:18:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:43.807 12:18:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:43.807 12:18:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:16:43.807 12:18:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:43.807 12:18:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:43.807 12:18:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:45.706 12:18:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:16:45.706 12:18:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:45.706 12:18:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:45.706 12:18:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:45.706 12:18:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:16:45.706 12:18:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:45.706 12:18:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:45.706 12:18:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:45.706 12:18:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:16:45.706 12:18:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:45.706 12:18:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:45.706 12:18:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:45.707 12:18:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:16:45.707 12:18:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:45.707 12:18:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:45.707 12:18:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:45.707 12:18:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:16:45.707 12:18:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:45.707 12:18:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:45.707 12:18:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:45.707 12:18:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:16:45.707 12:18:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:45.707 12:18:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:45.707 12:18:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:45.707 12:18:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:16:45.707 12:18:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:45.707 12:18:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:45.707 12:18:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:45.707 12:18:55 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:16:45.707 12:18:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:45.707 12:18:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:45.707 12:18:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:45.707 12:18:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:16:45.707 12:18:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:45.707 12:18:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:45.707 12:18:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:45.707 12:18:55 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:45.707 12:18:55 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:16:45.707 12:18:55 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:45.707 00:16:45.707 real 0m2.768s 00:16:45.707 user 0m7.968s 00:16:45.707 sys 0m0.213s 00:16:45.707 12:18:55 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:45.707 12:18:55 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:16:45.707 ************************************ 00:16:45.707 END TEST accel_decomp_mcore 00:16:45.707 ************************************ 00:16:45.707 12:18:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:16:45.707 12:18:55 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:16:45.707 12:18:55 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:16:45.707 12:18:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:45.707 12:18:55 accel -- common/autotest_common.sh@10 -- # set +x 00:16:45.707 ************************************ 00:16:45.707 START TEST accel_decomp_full_mcore 00:16:45.707 ************************************ 00:16:45.707 12:18:55 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:16:45.707 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:16:45.707 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:16:45.965 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:45.965 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:45.965 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:16:45.965 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:16:45.965 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:16:45.965 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:45.965 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:45.965 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:45.965 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:45.965 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:45.965 12:18:55 
accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:16:45.965 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:16:45.965 [2024-07-10 12:18:55.243345] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:16:45.965 [2024-07-10 12:18:55.243485] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66154 ] 00:16:45.965 [2024-07-10 12:18:55.411527] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:46.223 [2024-07-10 12:18:55.663989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:46.223 [2024-07-10 12:18:55.664174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:46.223 [2024-07-10 12:18:55.664288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:46.223 [2024-07-10 12:18:55.664336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.481 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:16:46.481 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:46.481 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:46.481 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:46.481 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:16:46.481 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:46.481 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:46.482 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:46.482 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:16:46.482 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:46.482 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:46.482 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:46.482 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:16:46.482 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:46.482 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:46.482 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:46.482 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:16:46.482 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:46.482 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:46.482 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:46.482 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:16:46.482 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:46.482 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:46.482 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:46.482 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:16:46.482 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:46.482 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:16:46.482 12:18:55 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:46.482 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:46.482 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:16:46.482 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:46.482 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:46.482 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:46.482 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:16:46.482 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:46.482 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:46.482 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:46.482 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:16:46.482 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:46.482 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:16:46.482 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:46.482 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:46.482 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:16:46.482 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:46.482 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:46.482 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:46.482 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:16:46.482 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:46.482 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:46.482 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:46.482 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:16:46.482 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:46.482 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:46.482 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:46.482 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:16:46.482 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:46.482 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:46.482 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:46.482 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:16:46.482 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:46.482 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:46.482 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:46.482 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:16:46.482 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:46.482 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:46.482 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:46.482 12:18:55 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:16:46.482 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:46.482 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:46.482 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:46.482 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:16:46.482 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:46.482 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:46.482 12:18:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:49.010 12:18:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:16:49.010 12:18:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:49.010 12:18:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:49.010 12:18:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:49.010 12:18:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:16:49.010 12:18:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:49.010 12:18:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:49.010 12:18:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:49.010 12:18:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:16:49.010 12:18:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:49.010 12:18:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:49.010 12:18:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:49.010 12:18:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:16:49.010 12:18:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:49.010 12:18:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:49.010 12:18:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:49.010 12:18:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:16:49.010 12:18:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:49.010 12:18:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:49.010 12:18:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:49.010 12:18:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:16:49.010 12:18:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:49.010 12:18:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:49.010 12:18:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:49.010 12:18:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:16:49.010 12:18:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:49.010 12:18:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:49.010 12:18:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:49.010 12:18:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:16:49.010 12:18:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:49.010 12:18:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:49.010 12:18:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:49.010 12:18:57 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:16:49.010 12:18:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:16:49.010 12:18:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:16:49.010 12:18:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:16:49.010 12:18:58 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:49.010 12:18:58 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:16:49.010 12:18:58 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:49.010 00:16:49.010 real 0m2.830s 00:16:49.010 user 0m0.023s 00:16:49.010 sys 0m0.003s 00:16:49.010 ************************************ 00:16:49.010 END TEST accel_decomp_full_mcore 00:16:49.010 ************************************ 00:16:49.010 12:18:58 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:49.010 12:18:58 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:16:49.010 12:18:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:16:49.010 12:18:58 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:16:49.010 12:18:58 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:16:49.010 12:18:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:49.010 12:18:58 accel -- common/autotest_common.sh@10 -- # set +x 00:16:49.010 ************************************ 00:16:49.010 START TEST accel_decomp_mthread 00:16:49.010 ************************************ 00:16:49.010 12:18:58 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:16:49.010 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:16:49.010 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:16:49.010 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:49.010 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:49.010 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:16:49.010 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:16:49.010 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:16:49.010 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:49.010 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:49.010 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:49.010 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:49.010 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:49.010 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:16:49.010 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:16:49.010 [2024-07-10 12:18:58.133987] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
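
accel_decomp_mthread, whose banner appears just above, differs from the plain decompress run only by -T 2; the two mcore tests before it differed only by -m 0xf (plus -o 0 for the full variant). From the configuration echoed back in the traces, -m sets the reactor core mask (four "Reactor started on core N" notices instead of one) and -T raises the per-core thread count from 1 to 2 (the "val=2" read back later, where the other runs read 1). Hand-run equivalents follow, with the same caveat that the flag meanings are inferred from the log rather than from accel_perf's help output.

./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -T 2        # two threads per core
./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -m 0xf      # cores 0-3
./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -o 0 -m 0xf # whole file on cores 0-3

The mcore timing reported earlier (real 0m2.768s but user 0m7.968s) is consistent with four reactors each spinning through the one-second run window.
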
00:16:49.010 [2024-07-10 12:18:58.134132] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66209 ] 00:16:49.010 [2024-07-10 12:18:58.304718] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:49.269 [2024-07-10 12:18:58.553284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:49.528 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:16:49.528 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:49.528 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:49.528 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:49.528 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:16:49.528 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:49.528 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:49.528 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:49.528 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:16:49.528 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:49.528 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:49.528 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:49.528 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:16:49.528 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:49.528 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:49.528 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:49.528 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:16:49.528 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:49.528 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:49.528 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:49.528 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:16:49.528 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:49.528 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:49.528 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:49.528 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:16:49.528 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:49.528 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:16:49.528 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:49.528 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:49.528 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:49.528 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:49.528 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:49.528 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:49.528 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:16:49.528 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 
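
Each accel_perf run announces itself with a "Starting SPDK ... initialization..." notice followed by the DPDK EAL parameters it was launched with; the only fields that change from run to run in this log are the core mask (-c 0x1 or -c 0xf) and the --file-prefix, which embeds the test's pid (66006, 66052, 66099, 66154, 66209 so far). When skimming a long console log it can help to pull just those launches out; the snippet below is purely a log-reading aid, assumes the bracketed EAL format shown above, and uses "console.log" as a placeholder for wherever this output was saved.

# List every accel_perf launch with its core mask and spdk_pid prefix.
grep -o 'DPDK EAL parameters: accel_perf [^]]*' console.log |
    grep -o -e '-c 0x[0-9a-fA-F]*' -e 'spdk_pid[0-9]*'
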
00:16:49.528 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:49.528 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:49.528 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:16:49.528 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:49.528 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:16:49.528 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:49.528 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:49.528 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:16:49.528 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:49.528 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:49.528 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:49.528 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:16:49.528 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:49.528 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:49.528 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:49.528 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:16:49.528 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:49.528 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:49.528 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:49.528 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:16:49.528 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:49.528 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:49.528 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:49.528 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:16:49.528 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:49.528 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:49.528 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:49.528 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:16:49.528 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:49.528 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:49.528 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:49.528 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:16:49.528 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:49.528 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:49.528 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:49.528 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:16:49.528 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:49.528 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:49.528 12:18:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:51.431 12:19:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:16:51.431 12:19:00 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:16:51.431 12:19:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:51.431 12:19:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:51.431 12:19:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:16:51.431 12:19:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:51.431 12:19:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:51.431 12:19:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:51.431 12:19:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:16:51.431 12:19:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:51.431 12:19:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:51.431 12:19:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:51.431 12:19:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:16:51.431 12:19:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:51.431 12:19:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:51.431 12:19:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:51.431 12:19:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:16:51.431 12:19:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:51.431 12:19:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:51.431 12:19:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:51.431 12:19:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:16:51.431 12:19:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:51.431 12:19:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:51.431 12:19:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:51.431 12:19:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:16:51.431 12:19:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:51.431 12:19:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:51.431 12:19:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:51.431 12:19:00 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:51.431 12:19:00 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:16:51.431 12:19:00 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:51.431 00:16:51.431 real 0m2.760s 00:16:51.431 user 0m2.467s 00:16:51.431 sys 0m0.203s 00:16:51.431 12:19:00 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:51.431 ************************************ 00:16:51.431 END TEST accel_decomp_mthread 00:16:51.431 ************************************ 00:16:51.431 12:19:00 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:16:51.431 12:19:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:16:51.431 12:19:00 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:16:51.431 12:19:00 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:16:51.431 12:19:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:51.431 12:19:00 accel -- common/autotest_common.sh@10 -- # set +x 00:16:51.431 ************************************ 00:16:51.431 START 
TEST accel_decomp_full_mthread 00:16:51.431 ************************************ 00:16:51.431 12:19:00 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:16:51.431 12:19:00 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:16:51.431 12:19:00 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:16:51.431 12:19:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:51.431 12:19:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:51.431 12:19:00 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:16:51.431 12:19:00 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:16:51.431 12:19:00 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:16:51.691 12:19:00 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:51.691 12:19:00 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:51.691 12:19:00 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:51.691 12:19:00 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:51.691 12:19:00 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:51.691 12:19:00 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:16:51.691 12:19:00 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:16:51.691 [2024-07-10 12:19:00.961675] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:16:51.691 [2024-07-10 12:19:00.961822] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66250 ] 00:16:51.691 [2024-07-10 12:19:01.134576] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:51.949 [2024-07-10 12:19:01.377218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:52.206 12:19:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:16:52.206 12:19:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:52.206 12:19:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:52.206 12:19:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:52.206 12:19:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:16:52.206 12:19:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:52.206 12:19:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:52.206 12:19:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:52.206 12:19:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:16:52.206 12:19:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:52.206 12:19:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:52.207 12:19:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:52.207 12:19:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:16:52.207 12:19:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:52.207 12:19:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:52.207 12:19:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:52.207 12:19:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:16:52.207 12:19:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:52.207 12:19:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:52.207 12:19:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:52.207 12:19:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:16:52.207 12:19:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:52.207 12:19:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:52.207 12:19:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:52.207 12:19:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:16:52.207 12:19:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:52.207 12:19:01 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:16:52.207 12:19:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:52.207 12:19:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:52.207 12:19:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:16:52.207 12:19:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:52.207 12:19:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:52.207 12:19:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
00:16:52.207 12:19:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:16:52.207 12:19:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:52.207 12:19:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:52.207 12:19:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:52.207 12:19:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:16:52.207 12:19:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:52.207 12:19:01 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:16:52.207 12:19:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:52.207 12:19:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:52.207 12:19:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:16:52.207 12:19:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:52.207 12:19:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:52.207 12:19:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:52.207 12:19:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:16:52.207 12:19:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:52.207 12:19:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:52.207 12:19:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:52.207 12:19:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:16:52.207 12:19:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:52.207 12:19:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:52.207 12:19:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:52.207 12:19:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:16:52.207 12:19:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:52.207 12:19:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:52.207 12:19:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:52.207 12:19:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:16:52.207 12:19:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:52.207 12:19:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:52.207 12:19:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:52.207 12:19:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:16:52.207 12:19:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:52.207 12:19:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:52.207 12:19:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:52.207 12:19:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:16:52.207 12:19:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:52.207 12:19:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:52.207 12:19:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:52.207 12:19:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:16:52.207 12:19:01 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:52.207 12:19:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:52.207 12:19:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:54.748 12:19:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:16:54.748 12:19:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:54.748 12:19:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:54.748 12:19:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:54.748 12:19:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:16:54.748 12:19:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:54.748 12:19:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:54.748 12:19:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:54.748 12:19:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:16:54.748 12:19:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:54.748 12:19:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:54.748 12:19:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:54.748 12:19:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:16:54.748 12:19:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:54.748 12:19:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:54.748 12:19:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:54.748 12:19:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:16:54.748 12:19:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:54.748 12:19:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:54.748 12:19:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:54.748 12:19:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:16:54.748 12:19:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:54.748 12:19:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:54.748 12:19:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:54.748 12:19:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:16:54.748 12:19:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:16:54.748 12:19:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:16:54.748 12:19:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:16:54.748 12:19:03 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:54.748 12:19:03 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:16:54.748 12:19:03 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:54.748 00:16:54.748 real 0m2.786s 00:16:54.748 user 0m2.492s 00:16:54.748 sys 0m0.203s 00:16:54.748 12:19:03 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:54.748 12:19:03 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:16:54.748 ************************************ 00:16:54.748 END TEST accel_decomp_full_mthread 00:16:54.748 ************************************ 
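The two multithreaded decompress cases above drive the same accel_perf example binary, first with the default 4096-byte transfer size and then with -o 0 (here 111250-byte buffers). A minimal manual reproduction, assuming the repo layout shown in this run and skipping the -c /dev/fd/62 accel JSON config that the test harness feeds in, would be:

    # 1-second software decompress of the test bib file, two threads (-T 2), as in accel_decomp_mthread
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2
    # same run with -o 0 so each submission covers the larger buffer, as in accel_decomp_full_mthread
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2

This is a sketch of the command line visible in the trace, not the full harness invocation; the real run also validates that the software module handled the decompress opcode afterwards.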
00:16:54.748 12:19:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:16:54.748 12:19:03 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:16:54.748 12:19:03 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:16:54.748 12:19:03 accel -- accel/accel.sh@137 -- # build_accel_config 00:16:54.748 12:19:03 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:16:54.748 12:19:03 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:54.748 12:19:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:54.748 12:19:03 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:54.748 12:19:03 accel -- common/autotest_common.sh@10 -- # set +x 00:16:54.748 12:19:03 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:54.748 12:19:03 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:54.748 12:19:03 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:54.748 12:19:03 accel -- accel/accel.sh@40 -- # local IFS=, 00:16:54.748 12:19:03 accel -- accel/accel.sh@41 -- # jq -r . 00:16:54.748 ************************************ 00:16:54.748 START TEST accel_dif_functional_tests 00:16:54.748 ************************************ 00:16:54.748 12:19:03 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:16:54.748 [2024-07-10 12:19:03.854311] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:16:54.748 [2024-07-10 12:19:03.854442] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66303 ] 00:16:54.748 [2024-07-10 12:19:04.029108] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:55.006 [2024-07-10 12:19:04.279897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:55.006 [2024-07-10 12:19:04.280003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:55.006 [2024-07-10 12:19:04.280027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:55.263 00:16:55.263 00:16:55.263 CUnit - A unit testing framework for C - Version 2.1-3 00:16:55.263 http://cunit.sourceforge.net/ 00:16:55.263 00:16:55.263 00:16:55.263 Suite: accel_dif 00:16:55.263 Test: verify: DIF generated, GUARD check ...passed 00:16:55.263 Test: verify: DIF generated, APPTAG check ...passed 00:16:55.263 Test: verify: DIF generated, REFTAG check ...passed 00:16:55.263 Test: verify: DIF not generated, GUARD check ...passed 00:16:55.263 Test: verify: DIF not generated, APPTAG check ...passed 00:16:55.263 Test: verify: DIF not generated, REFTAG check ...passed 00:16:55.263 Test: verify: APPTAG correct, APPTAG check ...[2024-07-10 12:19:04.645241] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:16:55.263 [2024-07-10 12:19:04.645335] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:16:55.263 [2024-07-10 12:19:04.645381] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:16:55.263 passed 00:16:55.263 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:16:55.263 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:16:55.263 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:16:55.263 Test: verify: REFTAG_INIT 
correct, REFTAG check ...passed 00:16:55.263 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:16:55.263 Test: verify copy: DIF generated, GUARD check ...passed 00:16:55.263 Test: verify copy: DIF generated, APPTAG check ...[2024-07-10 12:19:04.645464] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:16:55.263 [2024-07-10 12:19:04.645627] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:16:55.263 passed 00:16:55.263 Test: verify copy: DIF generated, REFTAG check ...passed 00:16:55.263 Test: verify copy: DIF not generated, GUARD check ...passed 00:16:55.263 Test: verify copy: DIF not generated, APPTAG check ...passed 00:16:55.263 Test: verify copy: DIF not generated, REFTAG check ...passed 00:16:55.263 Test: generate copy: DIF generated, GUARD check ...[2024-07-10 12:19:04.645846] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:16:55.263 [2024-07-10 12:19:04.645898] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:16:55.263 [2024-07-10 12:19:04.645945] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:16:55.263 passed 00:16:55.263 Test: generate copy: DIF generated, APTTAG check ...passed 00:16:55.263 Test: generate copy: DIF generated, REFTAG check ...passed 00:16:55.263 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:16:55.263 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:16:55.263 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:16:55.263 Test: generate copy: iovecs-len validate ...passed 00:16:55.263 Test: generate copy: buffer alignment validate ...passed 00:16:55.263 00:16:55.263 Run Summary: Type Total Ran Passed Failed Inactive 00:16:55.263 suites 1 1 n/a 0 0 00:16:55.263 tests 26 26 26 0 0 00:16:55.263 asserts 115 115 115 0 n/a 00:16:55.263 00:16:55.263 Elapsed time = 0.003 seconds 00:16:55.263 [2024-07-10 12:19:04.646276] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:16:56.637 00:16:56.637 real 0m2.203s 00:16:56.637 user 0m4.238s 00:16:56.637 sys 0m0.278s 00:16:56.637 12:19:05 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:56.637 12:19:05 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:16:56.637 ************************************ 00:16:56.637 END TEST accel_dif_functional_tests 00:16:56.637 ************************************ 00:16:56.637 12:19:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:16:56.637 00:16:56.637 real 1m6.970s 00:16:56.637 user 1m12.346s 00:16:56.637 sys 0m6.497s 00:16:56.637 12:19:06 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:56.637 12:19:06 accel -- common/autotest_common.sh@10 -- # set +x 00:16:56.637 ************************************ 00:16:56.637 END TEST accel 00:16:56.637 ************************************ 00:16:56.637 12:19:06 -- common/autotest_common.sh@1142 -- # return 0 00:16:56.637 12:19:06 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:16:56.637 12:19:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:56.637 12:19:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:56.637 12:19:06 -- common/autotest_common.sh@10 -- # set +x 00:16:56.637 ************************************ 00:16:56.637 START TEST accel_rpc 00:16:56.637 ************************************ 00:16:56.637 12:19:06 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:16:56.895 * Looking for test storage... 00:16:56.895 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:16:56.895 12:19:06 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:16:56.895 12:19:06 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=66390 00:16:56.895 12:19:06 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:16:56.895 12:19:06 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 66390 00:16:56.895 12:19:06 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 66390 ']' 00:16:56.895 12:19:06 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:56.895 12:19:06 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:56.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:56.895 12:19:06 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:56.895 12:19:06 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:56.895 12:19:06 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.895 [2024-07-10 12:19:06.328659] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:16:56.895 [2024-07-10 12:19:06.328821] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66390 ] 00:16:57.153 [2024-07-10 12:19:06.502394] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.411 [2024-07-10 12:19:06.750911] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:57.670 12:19:07 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:57.670 12:19:07 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:16:57.670 12:19:07 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:16:57.670 12:19:07 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:16:57.670 12:19:07 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:16:57.670 12:19:07 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:16:57.670 12:19:07 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:16:57.670 12:19:07 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:57.670 12:19:07 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:57.670 12:19:07 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.670 ************************************ 00:16:57.670 START TEST accel_assign_opcode 00:16:57.670 ************************************ 00:16:57.670 12:19:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:16:57.928 12:19:07 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:16:57.928 12:19:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.928 12:19:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:16:57.928 [2024-07-10 12:19:07.155765] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:16:57.928 12:19:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.928 12:19:07 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:16:57.928 12:19:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.928 12:19:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:16:57.928 [2024-07-10 12:19:07.167652] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:16:57.928 12:19:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.928 12:19:07 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:16:57.928 12:19:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.928 12:19:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:16:58.865 12:19:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.865 12:19:08 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:16:58.865 12:19:08 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:16:58.865 12:19:08 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:16:58.865 12:19:08 accel_rpc.accel_assign_opcode -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.865 12:19:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:16:58.865 12:19:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.865 software 00:16:58.865 ************************************ 00:16:58.865 END TEST accel_assign_opcode 00:16:58.865 ************************************ 00:16:58.865 00:16:58.865 real 0m0.978s 00:16:58.865 user 0m0.048s 00:16:58.865 sys 0m0.020s 00:16:58.865 12:19:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:58.865 12:19:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:16:58.865 12:19:08 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:16:58.865 12:19:08 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 66390 00:16:58.865 12:19:08 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 66390 ']' 00:16:58.865 12:19:08 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 66390 00:16:58.865 12:19:08 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:16:58.865 12:19:08 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:58.865 12:19:08 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66390 00:16:58.865 killing process with pid 66390 00:16:58.865 12:19:08 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:58.865 12:19:08 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:58.865 12:19:08 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66390' 00:16:58.865 12:19:08 accel_rpc -- common/autotest_common.sh@967 -- # kill 66390 00:16:58.865 12:19:08 accel_rpc -- common/autotest_common.sh@972 -- # wait 66390 00:17:01.395 ************************************ 00:17:01.395 END TEST accel_rpc 00:17:01.395 ************************************ 00:17:01.395 00:17:01.395 real 0m4.669s 00:17:01.395 user 0m4.481s 00:17:01.395 sys 0m0.626s 00:17:01.395 12:19:10 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:01.395 12:19:10 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.395 12:19:10 -- common/autotest_common.sh@1142 -- # return 0 00:17:01.395 12:19:10 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:17:01.395 12:19:10 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:17:01.395 12:19:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:01.395 12:19:10 -- common/autotest_common.sh@10 -- # set +x 00:17:01.395 ************************************ 00:17:01.395 START TEST app_cmdline 00:17:01.395 ************************************ 00:17:01.395 12:19:10 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:17:01.653 * Looking for test storage... 
00:17:01.653 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:17:01.653 12:19:10 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:17:01.653 12:19:10 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=66512 00:17:01.653 12:19:10 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:17:01.653 12:19:10 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 66512 00:17:01.653 12:19:10 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 66512 ']' 00:17:01.653 12:19:10 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:01.653 12:19:10 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:01.653 12:19:10 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:01.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:01.653 12:19:10 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:01.653 12:19:10 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:17:01.653 [2024-07-10 12:19:11.066541] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:17:01.653 [2024-07-10 12:19:11.066890] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66512 ] 00:17:01.910 [2024-07-10 12:19:11.240423] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:02.168 [2024-07-10 12:19:11.489311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:03.104 12:19:12 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:03.104 12:19:12 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:17:03.104 12:19:12 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:17:03.363 { 00:17:03.363 "version": "SPDK v24.09-pre git sha1 968224f46", 00:17:03.363 "fields": { 00:17:03.363 "major": 24, 00:17:03.363 "minor": 9, 00:17:03.363 "patch": 0, 00:17:03.363 "suffix": "-pre", 00:17:03.363 "commit": "968224f46" 00:17:03.363 } 00:17:03.363 } 00:17:03.363 12:19:12 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:17:03.363 12:19:12 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:17:03.363 12:19:12 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:17:03.363 12:19:12 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:17:03.363 12:19:12 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:17:03.363 12:19:12 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.363 12:19:12 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:17:03.363 12:19:12 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:17:03.363 12:19:12 app_cmdline -- app/cmdline.sh@26 -- # sort 00:17:03.363 12:19:12 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.363 12:19:12 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:17:03.363 12:19:12 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:17:03.363 12:19:12 app_cmdline -- 
app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:17:03.363 12:19:12 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:17:03.363 12:19:12 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:17:03.363 12:19:12 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:03.363 12:19:12 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:03.363 12:19:12 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:03.363 12:19:12 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:03.363 12:19:12 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:03.363 12:19:12 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:03.363 12:19:12 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:03.363 12:19:12 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:03.363 12:19:12 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:17:03.621 request: 00:17:03.621 { 00:17:03.621 "method": "env_dpdk_get_mem_stats", 00:17:03.621 "req_id": 1 00:17:03.621 } 00:17:03.621 Got JSON-RPC error response 00:17:03.621 response: 00:17:03.621 { 00:17:03.621 "code": -32601, 00:17:03.621 "message": "Method not found" 00:17:03.621 } 00:17:03.621 12:19:12 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:17:03.621 12:19:12 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:03.621 12:19:12 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:03.621 12:19:12 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:03.621 12:19:12 app_cmdline -- app/cmdline.sh@1 -- # killprocess 66512 00:17:03.621 12:19:12 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 66512 ']' 00:17:03.621 12:19:12 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 66512 00:17:03.621 12:19:12 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:17:03.621 12:19:12 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:03.621 12:19:12 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66512 00:17:03.621 killing process with pid 66512 00:17:03.621 12:19:12 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:03.621 12:19:12 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:03.621 12:19:12 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66512' 00:17:03.621 12:19:12 app_cmdline -- common/autotest_common.sh@967 -- # kill 66512 00:17:03.621 12:19:12 app_cmdline -- common/autotest_common.sh@972 -- # wait 66512 00:17:06.154 ************************************ 00:17:06.154 END TEST app_cmdline 00:17:06.154 ************************************ 00:17:06.154 00:17:06.154 real 0m4.597s 00:17:06.154 user 0m4.723s 00:17:06.154 sys 0m0.644s 00:17:06.154 12:19:15 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:06.154 12:19:15 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:17:06.154 12:19:15 -- common/autotest_common.sh@1142 -- # return 0 00:17:06.154 12:19:15 -- 
spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:17:06.154 12:19:15 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:17:06.154 12:19:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:06.154 12:19:15 -- common/autotest_common.sh@10 -- # set +x 00:17:06.154 ************************************ 00:17:06.154 START TEST version 00:17:06.154 ************************************ 00:17:06.154 12:19:15 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:17:06.154 * Looking for test storage... 00:17:06.154 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:17:06.154 12:19:15 version -- app/version.sh@17 -- # get_header_version major 00:17:06.154 12:19:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:17:06.154 12:19:15 version -- app/version.sh@14 -- # cut -f2 00:17:06.154 12:19:15 version -- app/version.sh@14 -- # tr -d '"' 00:17:06.155 12:19:15 version -- app/version.sh@17 -- # major=24 00:17:06.155 12:19:15 version -- app/version.sh@18 -- # get_header_version minor 00:17:06.155 12:19:15 version -- app/version.sh@14 -- # cut -f2 00:17:06.155 12:19:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:17:06.155 12:19:15 version -- app/version.sh@14 -- # tr -d '"' 00:17:06.414 12:19:15 version -- app/version.sh@18 -- # minor=9 00:17:06.414 12:19:15 version -- app/version.sh@19 -- # get_header_version patch 00:17:06.414 12:19:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:17:06.414 12:19:15 version -- app/version.sh@14 -- # cut -f2 00:17:06.414 12:19:15 version -- app/version.sh@14 -- # tr -d '"' 00:17:06.414 12:19:15 version -- app/version.sh@19 -- # patch=0 00:17:06.414 12:19:15 version -- app/version.sh@20 -- # get_header_version suffix 00:17:06.414 12:19:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:17:06.414 12:19:15 version -- app/version.sh@14 -- # cut -f2 00:17:06.414 12:19:15 version -- app/version.sh@14 -- # tr -d '"' 00:17:06.414 12:19:15 version -- app/version.sh@20 -- # suffix=-pre 00:17:06.414 12:19:15 version -- app/version.sh@22 -- # version=24.9 00:17:06.414 12:19:15 version -- app/version.sh@25 -- # (( patch != 0 )) 00:17:06.414 12:19:15 version -- app/version.sh@28 -- # version=24.9rc0 00:17:06.414 12:19:15 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:17:06.414 12:19:15 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:17:06.414 12:19:15 version -- app/version.sh@30 -- # py_version=24.9rc0 00:17:06.414 12:19:15 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:17:06.414 00:17:06.414 real 0m0.216s 00:17:06.414 user 0m0.119s 00:17:06.414 sys 0m0.151s 00:17:06.414 12:19:15 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:06.414 12:19:15 version -- common/autotest_common.sh@10 -- # set +x 00:17:06.414 ************************************ 00:17:06.414 END TEST version 00:17:06.414 ************************************ 
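The version check above pulls each component out of include/spdk/version.h with grep/cut/tr and compares the assembled string against what the Python package reports. A condensed sketch of that extraction, using the same paths as this run (the actual helper lives in test/app/version.sh and handles case conversion and the patch component as traced above), is:

    get_header_version() {   # e.g. get_header_version MAJOR -> 24
        grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" \
            /home/vagrant/spdk_repo/spdk/include/spdk/version.h | cut -f2 | tr -d '"'
    }
    version="$(get_header_version MAJOR).$(get_header_version MINOR)"   # 24.9
    # the -pre suffix is what maps the final string to 24.9rc0 before it is
    # compared against: python3 -c 'import spdk; print(spdk.__version__)'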
00:17:06.414 12:19:15 -- common/autotest_common.sh@1142 -- # return 0 00:17:06.414 12:19:15 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:17:06.414 12:19:15 -- spdk/autotest.sh@198 -- # uname -s 00:17:06.414 12:19:15 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:17:06.414 12:19:15 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:17:06.414 12:19:15 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:17:06.414 12:19:15 -- spdk/autotest.sh@211 -- # '[' 1 -eq 1 ']' 00:17:06.414 12:19:15 -- spdk/autotest.sh@212 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:17:06.414 12:19:15 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:06.414 12:19:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:06.414 12:19:15 -- common/autotest_common.sh@10 -- # set +x 00:17:06.414 ************************************ 00:17:06.414 START TEST blockdev_nvme 00:17:06.414 ************************************ 00:17:06.414 12:19:15 blockdev_nvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:17:06.673 * Looking for test storage... 00:17:06.673 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:17:06.673 12:19:15 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:17:06.673 12:19:15 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:17:06.673 12:19:15 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:17:06.673 12:19:15 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:06.673 12:19:15 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:17:06.673 12:19:15 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:17:06.673 12:19:15 blockdev_nvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:17:06.673 12:19:15 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:17:06.673 12:19:15 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:17:06.673 12:19:15 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:17:06.673 12:19:15 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:17:06.673 12:19:15 blockdev_nvme -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:17:06.674 12:19:15 blockdev_nvme -- bdev/blockdev.sh@674 -- # uname -s 00:17:06.674 12:19:15 blockdev_nvme -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:17:06.674 12:19:15 blockdev_nvme -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:17:06.674 12:19:15 blockdev_nvme -- bdev/blockdev.sh@682 -- # test_type=nvme 00:17:06.674 12:19:15 blockdev_nvme -- bdev/blockdev.sh@683 -- # crypto_device= 00:17:06.674 12:19:15 blockdev_nvme -- bdev/blockdev.sh@684 -- # dek= 00:17:06.674 12:19:15 blockdev_nvme -- bdev/blockdev.sh@685 -- # env_ctx= 00:17:06.674 12:19:15 blockdev_nvme -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:17:06.674 12:19:15 blockdev_nvme -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:17:06.674 12:19:15 blockdev_nvme -- bdev/blockdev.sh@690 -- # [[ nvme == bdev ]] 00:17:06.674 12:19:15 blockdev_nvme -- bdev/blockdev.sh@690 -- # [[ nvme == crypto_* ]] 00:17:06.674 12:19:15 blockdev_nvme -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:17:06.674 12:19:15 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=66685 00:17:06.674 12:19:15 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:17:06.674 12:19:15 
blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:17:06.674 12:19:15 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 66685 00:17:06.674 12:19:15 blockdev_nvme -- common/autotest_common.sh@829 -- # '[' -z 66685 ']' 00:17:06.674 12:19:15 blockdev_nvme -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:06.674 12:19:15 blockdev_nvme -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:06.674 12:19:15 blockdev_nvme -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:06.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:06.674 12:19:15 blockdev_nvme -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:06.674 12:19:15 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:17:06.674 [2024-07-10 12:19:16.046009] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:17:06.674 [2024-07-10 12:19:16.046371] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66685 ] 00:17:06.933 [2024-07-10 12:19:16.217498] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:07.192 [2024-07-10 12:19:16.461241] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:08.127 12:19:17 blockdev_nvme -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:08.127 12:19:17 blockdev_nvme -- common/autotest_common.sh@862 -- # return 0 00:17:08.127 12:19:17 blockdev_nvme -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:17:08.127 12:19:17 blockdev_nvme -- bdev/blockdev.sh@699 -- # setup_nvme_conf 00:17:08.127 12:19:17 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:17:08.127 12:19:17 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:17:08.127 12:19:17 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:08.127 12:19:17 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:17:08.127 12:19:17 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.127 12:19:17 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:17:08.387 12:19:17 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.387 12:19:17 blockdev_nvme -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:17:08.387 12:19:17 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.387 12:19:17 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:17:08.387 12:19:17 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.387 12:19:17 blockdev_nvme -- bdev/blockdev.sh@740 -- # cat 00:17:08.387 12:19:17 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n 
accel 00:17:08.387 12:19:17 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.387 12:19:17 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:17:08.387 12:19:17 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.387 12:19:17 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:17:08.387 12:19:17 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.387 12:19:17 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:17:08.648 12:19:17 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.648 12:19:17 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:17:08.648 12:19:17 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.648 12:19:17 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:17:08.648 12:19:17 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.648 12:19:17 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:17:08.648 12:19:17 blockdev_nvme -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:17:08.648 12:19:17 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.648 12:19:17 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:17:08.648 12:19:17 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:17:08.648 12:19:18 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.648 12:19:18 blockdev_nvme -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:17:08.648 12:19:18 blockdev_nvme -- bdev/blockdev.sh@749 -- # jq -r .name 00:17:08.648 12:19:18 blockdev_nvme -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "064507a9-de11-4e00-97be-42ca7633c0f1"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "064507a9-de11-4e00-97be-42ca7633c0f1",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "d7669bc8-e8b0-4226-91c8-4c3b2198fadb"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "d7669bc8-e8b0-4226-91c8-4c3b2198fadb",' ' 
"assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "a8c0a33a-f342-4d65-a3ce-c77511453fa7"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a8c0a33a-f342-4d65-a3ce-c77511453fa7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "b5fbaad8-1813-42b0-8f38-008d5e2bc09e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b5fbaad8-1813-42b0-8f38-008d5e2bc09e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": 
false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "0f549e27-2fdb-47df-9bd9-ca3a138a2864"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "0f549e27-2fdb-47df-9bd9-ca3a138a2864",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "d8ac0aba-e60e-4d88-868e-f52e83d7ae73"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "d8ac0aba-e60e-4d88-868e-f52e83d7ae73",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' 
"firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:17:08.648 12:19:18 blockdev_nvme -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:17:08.648 12:19:18 blockdev_nvme -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1 00:17:08.648 12:19:18 blockdev_nvme -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:17:08.648 12:19:18 blockdev_nvme -- bdev/blockdev.sh@754 -- # killprocess 66685 00:17:08.648 12:19:18 blockdev_nvme -- common/autotest_common.sh@948 -- # '[' -z 66685 ']' 00:17:08.648 12:19:18 blockdev_nvme -- common/autotest_common.sh@952 -- # kill -0 66685 00:17:08.648 12:19:18 blockdev_nvme -- common/autotest_common.sh@953 -- # uname 00:17:08.648 12:19:18 blockdev_nvme -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:08.648 12:19:18 blockdev_nvme -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66685 00:17:08.648 12:19:18 blockdev_nvme -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:08.648 12:19:18 blockdev_nvme -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:08.648 killing process with pid 66685 00:17:08.648 12:19:18 blockdev_nvme -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66685' 00:17:08.648 12:19:18 blockdev_nvme -- common/autotest_common.sh@967 -- # kill 66685 00:17:08.648 12:19:18 blockdev_nvme -- common/autotest_common.sh@972 -- # wait 66685 00:17:11.181 12:19:20 blockdev_nvme -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:11.181 12:19:20 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:17:11.181 12:19:20 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:17:11.181 12:19:20 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:11.181 12:19:20 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:17:11.181 ************************************ 00:17:11.181 START TEST bdev_hello_world 00:17:11.181 ************************************ 00:17:11.181 12:19:20 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:17:11.440 [2024-07-10 12:19:20.667316] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:17:11.440 [2024-07-10 12:19:20.667455] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66784 ] 00:17:11.440 [2024-07-10 12:19:20.840839] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.725 [2024-07-10 12:19:21.083131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:12.661 [2024-07-10 12:19:21.782340] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:17:12.661 [2024-07-10 12:19:21.782405] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:17:12.661 [2024-07-10 12:19:21.782455] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:17:12.661 [2024-07-10 12:19:21.785427] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:17:12.661 [2024-07-10 12:19:21.786060] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:17:12.661 [2024-07-10 12:19:21.786099] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:17:12.661 [2024-07-10 12:19:21.786343] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:17:12.661 00:17:12.661 [2024-07-10 12:19:21.786366] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:17:13.598 00:17:13.598 real 0m2.420s 00:17:13.598 user 0m2.034s 00:17:13.598 sys 0m0.280s 00:17:13.598 ************************************ 00:17:13.598 END TEST bdev_hello_world 00:17:13.598 ************************************ 00:17:13.598 12:19:22 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:13.598 12:19:22 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:17:13.598 12:19:23 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:17:13.598 12:19:23 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:17:13.598 12:19:23 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:13.598 12:19:23 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:13.598 12:19:23 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:17:13.598 ************************************ 00:17:13.598 START TEST bdev_bounds 00:17:13.598 ************************************ 00:17:13.598 12:19:23 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:17:13.598 12:19:23 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=66833 00:17:13.598 12:19:23 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:17:13.598 12:19:23 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:17:13.598 Process bdevio pid: 66833 00:17:13.598 12:19:23 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 66833' 00:17:13.598 12:19:23 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 66833 00:17:13.598 12:19:23 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 66833 ']' 00:17:13.598 12:19:23 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:13.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:13.598 12:19:23 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:13.598 12:19:23 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:13.598 12:19:23 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:13.598 12:19:23 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:17:13.857 [2024-07-10 12:19:23.165485] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:17:13.857 [2024-07-10 12:19:23.165636] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66833 ] 00:17:14.116 [2024-07-10 12:19:23.339050] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:14.116 [2024-07-10 12:19:23.588171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:14.116 [2024-07-10 12:19:23.588304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:14.116 [2024-07-10 12:19:23.588347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:15.050 12:19:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:15.050 12:19:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:17:15.050 12:19:24 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:17:15.050 I/O targets: 00:17:15.050 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:17:15.050 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:17:15.050 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:17:15.050 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:17:15.050 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:17:15.050 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:17:15.050 00:17:15.050 00:17:15.050 CUnit - A unit testing framework for C - Version 2.1-3 00:17:15.050 http://cunit.sourceforge.net/ 00:17:15.050 00:17:15.050 00:17:15.050 Suite: bdevio tests on: Nvme3n1 00:17:15.050 Test: blockdev write read block ...passed 00:17:15.050 Test: blockdev write zeroes read block ...passed 00:17:15.050 Test: blockdev write zeroes read no split ...passed 00:17:15.050 Test: blockdev write zeroes read split ...passed 00:17:15.050 Test: blockdev write zeroes read split partial ...passed 00:17:15.050 Test: blockdev reset ...[2024-07-10 12:19:24.471995] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:17:15.050 [2024-07-10 12:19:24.476000] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:15.050 passed 00:17:15.050 Test: blockdev write read 8 blocks ...passed 00:17:15.050 Test: blockdev write read size > 128k ...passed 00:17:15.050 Test: blockdev write read invalid size ...passed 00:17:15.050 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:15.050 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:15.050 Test: blockdev write read max offset ...passed 00:17:15.050 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:15.050 Test: blockdev writev readv 8 blocks ...passed 00:17:15.050 Test: blockdev writev readv 30 x 1block ...passed 00:17:15.050 Test: blockdev writev readv block ...passed 00:17:15.050 Test: blockdev writev readv size > 128k ...passed 00:17:15.050 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:15.050 Test: blockdev comparev and writev ...[2024-07-10 12:19:24.485879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x26560a000 len:0x1000 00:17:15.050 [2024-07-10 12:19:24.485928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:17:15.050 passed 00:17:15.050 Test: blockdev nvme passthru rw ...passed 00:17:15.050 Test: blockdev nvme passthru vendor specific ...passed 00:17:15.050 Test: blockdev nvme admin passthru ...[2024-07-10 12:19:24.486951] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:17:15.050 [2024-07-10 12:19:24.486994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:17:15.050 passed 00:17:15.050 Test: blockdev copy ...passed 00:17:15.050 Suite: bdevio tests on: Nvme2n3 00:17:15.050 Test: blockdev write read block ...passed 00:17:15.050 Test: blockdev write zeroes read block ...passed 00:17:15.050 Test: blockdev write zeroes read no split ...passed 00:17:15.309 Test: blockdev write zeroes read split ...passed 00:17:15.309 Test: blockdev write zeroes read split partial ...passed 00:17:15.309 Test: blockdev reset ...[2024-07-10 12:19:24.563193] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:17:15.309 [2024-07-10 12:19:24.567382] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:15.309 passed 00:17:15.309 Test: blockdev write read 8 blocks ...passed 00:17:15.309 Test: blockdev write read size > 128k ...passed 00:17:15.309 Test: blockdev write read invalid size ...passed 00:17:15.309 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:15.309 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:15.309 Test: blockdev write read max offset ...passed 00:17:15.309 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:15.309 Test: blockdev writev readv 8 blocks ...passed 00:17:15.309 Test: blockdev writev readv 30 x 1block ...passed 00:17:15.309 Test: blockdev writev readv block ...passed 00:17:15.309 Test: blockdev writev readv size > 128k ...passed 00:17:15.309 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:15.309 Test: blockdev comparev and writev ...[2024-07-10 12:19:24.576290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x274e04000 len:0x1000 00:17:15.309 [2024-07-10 12:19:24.576338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:17:15.309 passed 00:17:15.309 Test: blockdev nvme passthru rw ...passed 00:17:15.309 Test: blockdev nvme passthru vendor specific ...passed 00:17:15.309 Test: blockdev nvme admin passthru ...[2024-07-10 12:19:24.577176] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:17:15.309 [2024-07-10 12:19:24.577211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:17:15.309 passed 00:17:15.309 Test: blockdev copy ...passed 00:17:15.309 Suite: bdevio tests on: Nvme2n2 00:17:15.309 Test: blockdev write read block ...passed 00:17:15.309 Test: blockdev write zeroes read block ...passed 00:17:15.309 Test: blockdev write zeroes read no split ...passed 00:17:15.309 Test: blockdev write zeroes read split ...passed 00:17:15.309 Test: blockdev write zeroes read split partial ...passed 00:17:15.309 Test: blockdev reset ...[2024-07-10 12:19:24.659484] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:17:15.309 [2024-07-10 12:19:24.663596] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:15.309 passed 00:17:15.309 Test: blockdev write read 8 blocks ...passed 00:17:15.309 Test: blockdev write read size > 128k ...passed 00:17:15.309 Test: blockdev write read invalid size ...passed 00:17:15.309 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:15.309 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:15.309 Test: blockdev write read max offset ...passed 00:17:15.309 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:15.309 Test: blockdev writev readv 8 blocks ...passed 00:17:15.309 Test: blockdev writev readv 30 x 1block ...passed 00:17:15.309 Test: blockdev writev readv block ...passed 00:17:15.309 Test: blockdev writev readv size > 128k ...passed 00:17:15.309 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:15.309 Test: blockdev comparev and writev ...[2024-07-10 12:19:24.673123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27103a000 len:0x1000 00:17:15.309 [2024-07-10 12:19:24.673301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:17:15.309 passed 00:17:15.309 Test: blockdev nvme passthru rw ...passed 00:17:15.309 Test: blockdev nvme passthru vendor specific ...[2024-07-10 12:19:24.674542] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:17:15.309 [2024-07-10 12:19:24.674634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0passed 00:17:15.309 Test: blockdev nvme admin passthru ... sqhd:001c p:1 m:0 dnr:1 00:17:15.309 passed 00:17:15.309 Test: blockdev copy ...passed 00:17:15.309 Suite: bdevio tests on: Nvme2n1 00:17:15.309 Test: blockdev write read block ...passed 00:17:15.309 Test: blockdev write zeroes read block ...passed 00:17:15.309 Test: blockdev write zeroes read no split ...passed 00:17:15.309 Test: blockdev write zeroes read split ...passed 00:17:15.309 Test: blockdev write zeroes read split partial ...passed 00:17:15.309 Test: blockdev reset ...[2024-07-10 12:19:24.755863] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:17:15.310 [2024-07-10 12:19:24.760220] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:15.310 passed 00:17:15.310 Test: blockdev write read 8 blocks ...passed 00:17:15.310 Test: blockdev write read size > 128k ...passed 00:17:15.310 Test: blockdev write read invalid size ...passed 00:17:15.310 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:15.310 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:15.310 Test: blockdev write read max offset ...passed 00:17:15.310 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:15.310 Test: blockdev writev readv 8 blocks ...passed 00:17:15.310 Test: blockdev writev readv 30 x 1block ...passed 00:17:15.310 Test: blockdev writev readv block ...passed 00:17:15.310 Test: blockdev writev readv size > 128k ...passed 00:17:15.310 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:15.310 Test: blockdev comparev and writev ...[2024-07-10 12:19:24.769397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x271034000 len:0x1000 00:17:15.310 [2024-07-10 12:19:24.769450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:17:15.310 passed 00:17:15.310 Test: blockdev nvme passthru rw ...passed 00:17:15.310 Test: blockdev nvme passthru vendor specific ...[2024-07-10 12:19:24.770356] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:17:15.310 passed 00:17:15.310 Test: blockdev nvme admin passthru ...[2024-07-10 12:19:24.770387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:17:15.310 passed 00:17:15.310 Test: blockdev copy ...passed 00:17:15.310 Suite: bdevio tests on: Nvme1n1 00:17:15.310 Test: blockdev write read block ...passed 00:17:15.310 Test: blockdev write zeroes read block ...passed 00:17:15.310 Test: blockdev write zeroes read no split ...passed 00:17:15.568 Test: blockdev write zeroes read split ...passed 00:17:15.568 Test: blockdev write zeroes read split partial ...passed 00:17:15.568 Test: blockdev reset ...[2024-07-10 12:19:24.855852] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:17:15.568 [2024-07-10 12:19:24.859584] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:15.568 passed 00:17:15.568 Test: blockdev write read 8 blocks ...passed 00:17:15.568 Test: blockdev write read size > 128k ...passed 00:17:15.568 Test: blockdev write read invalid size ...passed 00:17:15.568 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:15.568 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:15.568 Test: blockdev write read max offset ...passed 00:17:15.568 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:15.568 Test: blockdev writev readv 8 blocks ...passed 00:17:15.568 Test: blockdev writev readv 30 x 1block ...passed 00:17:15.568 Test: blockdev writev readv block ...passed 00:17:15.568 Test: blockdev writev readv size > 128k ...passed 00:17:15.568 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:15.568 Test: blockdev comparev and writev ...[2024-07-10 12:19:24.869518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x271030000 len:0x1000 00:17:15.568 [2024-07-10 12:19:24.869690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:17:15.568 passed 00:17:15.568 Test: blockdev nvme passthru rw ...passed 00:17:15.568 Test: blockdev nvme passthru vendor specific ...[2024-07-10 12:19:24.870959] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:17:15.568 [2024-07-10 12:19:24.871106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:17:15.568 passed 00:17:15.568 Test: blockdev nvme admin passthru ...passed 00:17:15.568 Test: blockdev copy ...passed 00:17:15.568 Suite: bdevio tests on: Nvme0n1 00:17:15.568 Test: blockdev write read block ...passed 00:17:15.568 Test: blockdev write zeroes read block ...passed 00:17:15.568 Test: blockdev write zeroes read no split ...passed 00:17:15.568 Test: blockdev write zeroes read split ...passed 00:17:15.568 Test: blockdev write zeroes read split partial ...passed 00:17:15.568 Test: blockdev reset ...[2024-07-10 12:19:24.953421] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:17:15.568 [2024-07-10 12:19:24.957408] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:17:15.568 passed 00:17:15.568 Test: blockdev write read 8 blocks ...passed 00:17:15.568 Test: blockdev write read size > 128k ...passed 00:17:15.568 Test: blockdev write read invalid size ...passed 00:17:15.568 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:15.568 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:15.568 Test: blockdev write read max offset ...passed 00:17:15.568 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:15.568 Test: blockdev writev readv 8 blocks ...passed 00:17:15.568 Test: blockdev writev readv 30 x 1block ...passed 00:17:15.568 Test: blockdev writev readv block ...passed 00:17:15.568 Test: blockdev writev readv size > 128k ...passed 00:17:15.568 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:15.568 Test: blockdev comparev and writev ...[2024-07-10 12:19:24.965035] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:17:15.568 separate metadata which is not supported yet. 
00:17:15.568 passed 00:17:15.568 Test: blockdev nvme passthru rw ...passed 00:17:15.568 Test: blockdev nvme passthru vendor specific ...[2024-07-10 12:19:24.965677] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:17:15.568 [2024-07-10 12:19:24.965747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:17:15.568 passed 00:17:15.568 Test: blockdev nvme admin passthru ...passed 00:17:15.568 Test: blockdev copy ...passed 00:17:15.568 00:17:15.568 Run Summary: Type Total Ran Passed Failed Inactive 00:17:15.568 suites 6 6 n/a 0 0 00:17:15.568 tests 138 138 138 0 0 00:17:15.568 asserts 893 893 893 0 n/a 00:17:15.568 00:17:15.568 Elapsed time = 1.577 seconds 00:17:15.568 0 00:17:15.568 12:19:25 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 66833 00:17:15.568 12:19:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 66833 ']' 00:17:15.568 12:19:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 66833 00:17:15.568 12:19:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:17:15.568 12:19:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:15.568 12:19:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66833 00:17:15.568 killing process with pid 66833 00:17:15.568 12:19:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:15.568 12:19:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:15.568 12:19:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66833' 00:17:15.568 12:19:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@967 -- # kill 66833 00:17:15.568 12:19:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # wait 66833 00:17:16.946 12:19:26 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:17:16.946 00:17:16.946 real 0m3.123s 00:17:16.946 user 0m7.523s 00:17:16.946 sys 0m0.446s 00:17:16.946 12:19:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:16.946 ************************************ 00:17:16.946 END TEST bdev_bounds 00:17:16.946 ************************************ 00:17:16.946 12:19:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:17:16.946 12:19:26 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:17:16.946 12:19:26 blockdev_nvme -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:17:16.946 12:19:26 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:17:16.946 12:19:26 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:16.946 12:19:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:17:16.946 ************************************ 00:17:16.946 START TEST bdev_nbd 00:17:16.946 ************************************ 00:17:16.946 12:19:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:17:16.946 12:19:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:17:16.946 12:19:26 blockdev_nvme.bdev_nbd -- 
bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:17:16.946 12:19:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:16.946 12:19:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:16.946 12:19:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:17:16.946 12:19:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:17:16.946 12:19:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=6 00:17:16.946 12:19:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:17:16.946 12:19:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:17:16.946 12:19:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:17:16.946 12:19:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=6 00:17:16.946 12:19:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:16.946 12:19:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:17:16.946 12:19:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:17:16.946 12:19:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:17:16.946 12:19:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=66898 00:17:16.946 12:19:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:17:16.946 12:19:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:17:16.946 12:19:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 66898 /var/tmp/spdk-nbd.sock 00:17:16.946 12:19:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 66898 ']' 00:17:16.946 12:19:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:17:16.946 12:19:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:16.946 12:19:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:17:16.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:17:16.946 12:19:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:16.946 12:19:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:17:16.946 [2024-07-10 12:19:26.374700] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
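(Editorial note, not part of the captured log.) The bdev_nbd test that starts here exports each bdev through the kernel NBD driver via a dedicated RPC socket and verifies it with a single 4 KiB direct read, exactly as the trace below shows for nbd0..nbd5. A condensed sketch of that per-bdev round-trip, using the RPC methods and paths that appear in the log:

    # Attach a bdev to an NBD device, read one block through it, list mappings, detach.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc nbd_start_disk Nvme0n1 /dev/nbd0                     # export bdev on /dev/nbd0
    dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest \
        bs=4096 count=1 iflag=direct                          # 1 x 4 KiB O_DIRECT read
    $rpc nbd_get_disks                                         # show nbd <-> bdev mappings
    $rpc nbd_stop_disk /dev/nbd0                               # detach again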
00:17:16.946 [2024-07-10 12:19:26.375046] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:17.205 [2024-07-10 12:19:26.549951] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.465 [2024-07-10 12:19:26.799606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:18.428 12:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:18.428 12:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:17:18.428 12:19:27 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:17:18.428 12:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:18.428 12:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:17:18.428 12:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:17:18.428 12:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:17:18.428 12:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:18.428 12:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:17:18.428 12:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:17:18.428 12:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:17:18.428 12:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:17:18.428 12:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:17:18.428 12:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:18.428 12:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:17:18.428 12:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:17:18.428 12:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:17:18.428 12:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:17:18.428 12:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:17:18.428 12:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:17:18.428 12:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:17:18.428 12:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:17:18.428 12:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:17:18.428 12:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:17:18.428 12:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:17:18.428 12:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:17:18.428 12:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:18.428 1+0 records in 
00:17:18.428 1+0 records out 00:17:18.428 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000890241 s, 4.6 MB/s 00:17:18.428 12:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:18.428 12:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:17:18.428 12:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:18.428 12:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:17:18.428 12:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:17:18.428 12:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:18.428 12:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:18.428 12:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:17:18.687 12:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:17:18.687 12:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:17:18.687 12:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:17:18.687 12:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:17:18.687 12:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:17:18.687 12:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:17:18.687 12:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:17:18.687 12:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:17:18.687 12:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:17:18.687 12:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:17:18.687 12:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:17:18.687 12:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:18.687 1+0 records in 00:17:18.687 1+0 records out 00:17:18.687 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000975478 s, 4.2 MB/s 00:17:18.687 12:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:18.687 12:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:17:18.687 12:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:18.687 12:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:17:18.687 12:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:17:18.687 12:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:18.687 12:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:18.687 12:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:17:18.946 12:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:17:18.946 12:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:17:18.946 12:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:17:18.946 12:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:17:18.946 12:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:17:18.946 12:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:17:18.946 12:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:17:18.946 12:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:17:18.946 12:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:17:18.946 12:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:17:18.946 12:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:17:18.946 12:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:18.946 1+0 records in 00:17:18.946 1+0 records out 00:17:18.946 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000670202 s, 6.1 MB/s 00:17:18.946 12:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:18.946 12:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:17:18.946 12:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:18.946 12:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:17:18.946 12:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:17:18.946 12:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:18.946 12:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:18.946 12:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:17:18.946 12:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:17:18.946 12:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:17:18.946 12:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:17:18.946 12:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:17:18.946 12:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:17:18.946 12:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:17:18.946 12:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:17:18.946 12:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:17:19.206 12:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:17:19.206 12:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:17:19.206 12:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:17:19.206 12:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:19.206 1+0 records in 00:17:19.206 1+0 records out 00:17:19.206 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000560116 s, 7.3 MB/s 00:17:19.206 12:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:19.206 12:19:28 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:17:19.206 12:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:19.206 12:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:17:19.206 12:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:17:19.206 12:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:19.206 12:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:19.206 12:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:17:19.206 12:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:17:19.206 12:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:17:19.206 12:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:17:19.206 12:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:17:19.206 12:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:17:19.206 12:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:17:19.206 12:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:17:19.206 12:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:17:19.206 12:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:17:19.206 12:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:17:19.206 12:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:17:19.206 12:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:19.206 1+0 records in 00:17:19.206 1+0 records out 00:17:19.206 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000861529 s, 4.8 MB/s 00:17:19.206 12:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:19.206 12:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:17:19.206 12:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:19.466 12:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:17:19.466 12:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:17:19.466 12:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:19.466 12:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:19.466 12:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:17:19.466 12:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:17:19.466 12:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:17:19.466 12:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:17:19.466 12:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:17:19.466 12:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:17:19.466 12:19:28 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@869 -- # (( i = 1 )) 00:17:19.466 12:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:17:19.466 12:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:17:19.466 12:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:17:19.466 12:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:17:19.466 12:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:17:19.466 12:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:19.466 1+0 records in 00:17:19.466 1+0 records out 00:17:19.466 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000794259 s, 5.2 MB/s 00:17:19.466 12:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:19.466 12:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:17:19.466 12:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:19.466 12:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:17:19.466 12:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:17:19.466 12:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:19.466 12:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:19.466 12:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:19.725 12:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:17:19.725 { 00:17:19.725 "nbd_device": "/dev/nbd0", 00:17:19.725 "bdev_name": "Nvme0n1" 00:17:19.725 }, 00:17:19.725 { 00:17:19.725 "nbd_device": "/dev/nbd1", 00:17:19.725 "bdev_name": "Nvme1n1" 00:17:19.725 }, 00:17:19.725 { 00:17:19.725 "nbd_device": "/dev/nbd2", 00:17:19.725 "bdev_name": "Nvme2n1" 00:17:19.725 }, 00:17:19.725 { 00:17:19.725 "nbd_device": "/dev/nbd3", 00:17:19.725 "bdev_name": "Nvme2n2" 00:17:19.725 }, 00:17:19.725 { 00:17:19.725 "nbd_device": "/dev/nbd4", 00:17:19.725 "bdev_name": "Nvme2n3" 00:17:19.725 }, 00:17:19.725 { 00:17:19.725 "nbd_device": "/dev/nbd5", 00:17:19.725 "bdev_name": "Nvme3n1" 00:17:19.725 } 00:17:19.725 ]' 00:17:19.725 12:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:17:19.725 12:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:17:19.725 { 00:17:19.725 "nbd_device": "/dev/nbd0", 00:17:19.725 "bdev_name": "Nvme0n1" 00:17:19.725 }, 00:17:19.725 { 00:17:19.725 "nbd_device": "/dev/nbd1", 00:17:19.725 "bdev_name": "Nvme1n1" 00:17:19.725 }, 00:17:19.725 { 00:17:19.725 "nbd_device": "/dev/nbd2", 00:17:19.725 "bdev_name": "Nvme2n1" 00:17:19.725 }, 00:17:19.725 { 00:17:19.725 "nbd_device": "/dev/nbd3", 00:17:19.725 "bdev_name": "Nvme2n2" 00:17:19.725 }, 00:17:19.725 { 00:17:19.725 "nbd_device": "/dev/nbd4", 00:17:19.725 "bdev_name": "Nvme2n3" 00:17:19.725 }, 00:17:19.725 { 00:17:19.725 "nbd_device": "/dev/nbd5", 00:17:19.725 "bdev_name": "Nvme3n1" 00:17:19.725 } 00:17:19.725 ]' 00:17:19.725 12:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:17:19.725 12:19:29 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:17:19.725 12:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:19.725 12:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:17:19.725 12:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:19.725 12:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:19.725 12:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:19.725 12:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:19.984 12:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:19.984 12:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:19.984 12:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:19.984 12:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:19.984 12:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:19.984 12:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:19.984 12:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:19.984 12:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:19.984 12:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:19.984 12:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:17:20.243 12:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:20.243 12:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:20.244 12:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:20.244 12:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:20.244 12:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:20.244 12:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:20.244 12:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:20.244 12:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:20.244 12:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:20.244 12:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:17:20.503 12:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:17:20.503 12:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:17:20.503 12:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:17:20.503 12:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:20.503 12:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:20.503 12:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:17:20.503 12:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:20.503 12:19:29 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:17:20.503 12:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:20.503 12:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:17:20.503 12:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:17:20.503 12:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:17:20.503 12:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:17:20.503 12:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:20.503 12:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:20.503 12:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:17:20.503 12:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:20.503 12:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:20.503 12:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:20.503 12:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:17:20.762 12:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:17:20.762 12:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:17:20.762 12:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:17:20.762 12:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:20.762 12:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:20.762 12:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:17:20.762 12:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:20.762 12:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:20.762 12:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:20.762 12:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:17:21.021 12:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:17:21.021 12:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:17:21.021 12:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:17:21.021 12:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:21.021 12:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:21.021 12:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:17:21.021 12:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:21.021 12:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:21.021 12:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:21.021 12:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:21.021 12:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:21.281 12:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:17:21.281 12:19:30 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:21.281 12:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:17:21.281 12:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:17:21.281 12:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:17:21.281 12:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:21.281 12:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:17:21.281 12:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:17:21.281 12:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:17:21.281 12:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:17:21.281 12:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:17:21.281 12:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:17:21.281 12:19:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:17:21.281 12:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:21.281 12:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:17:21.281 12:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:17:21.281 12:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:21.281 12:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:17:21.281 12:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:17:21.281 12:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:21.281 12:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:17:21.281 12:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:21.281 12:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:21.281 12:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:21.281 12:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:17:21.281 12:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:21.281 12:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:21.281 12:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:17:21.540 /dev/nbd0 00:17:21.540 12:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:21.540 12:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:21.540 12:19:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:17:21.540 12:19:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:17:21.540 12:19:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:17:21.540 
12:19:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:17:21.540 12:19:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:17:21.540 12:19:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:17:21.540 12:19:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:17:21.540 12:19:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:17:21.540 12:19:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:21.540 1+0 records in 00:17:21.540 1+0 records out 00:17:21.540 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000604113 s, 6.8 MB/s 00:17:21.540 12:19:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:21.540 12:19:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:17:21.540 12:19:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:21.540 12:19:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:17:21.540 12:19:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:17:21.540 12:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:21.540 12:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:21.540 12:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:17:21.799 /dev/nbd1 00:17:21.799 12:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:21.799 12:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:21.799 12:19:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:17:21.799 12:19:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:17:21.799 12:19:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:17:21.799 12:19:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:17:21.799 12:19:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:17:21.800 12:19:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:17:21.800 12:19:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:17:21.800 12:19:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:17:21.800 12:19:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:21.800 1+0 records in 00:17:21.800 1+0 records out 00:17:21.800 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000600248 s, 6.8 MB/s 00:17:21.800 12:19:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:21.800 12:19:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:17:21.800 12:19:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:21.800 12:19:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:17:21.800 12:19:31 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@887 -- # return 0 00:17:21.800 12:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:21.800 12:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:21.800 12:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:17:21.800 /dev/nbd10 00:17:22.058 12:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:17:22.058 12:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:17:22.058 12:19:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:17:22.058 12:19:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:17:22.058 12:19:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:17:22.058 12:19:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:17:22.058 12:19:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:17:22.058 12:19:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:17:22.058 12:19:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:17:22.058 12:19:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:17:22.058 12:19:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:22.058 1+0 records in 00:17:22.059 1+0 records out 00:17:22.059 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000546542 s, 7.5 MB/s 00:17:22.059 12:19:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:22.059 12:19:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:17:22.059 12:19:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:22.059 12:19:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:17:22.059 12:19:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:17:22.059 12:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:22.059 12:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:22.059 12:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:17:22.059 /dev/nbd11 00:17:22.059 12:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:17:22.059 12:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:17:22.059 12:19:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:17:22.059 12:19:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:17:22.059 12:19:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:17:22.059 12:19:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:17:22.059 12:19:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:17:22.059 12:19:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:17:22.059 12:19:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:17:22.059 12:19:31 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:17:22.059 12:19:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:22.059 1+0 records in 00:17:22.059 1+0 records out 00:17:22.059 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000516655 s, 7.9 MB/s 00:17:22.059 12:19:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:22.317 12:19:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:17:22.317 12:19:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:22.317 12:19:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:17:22.317 12:19:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:17:22.317 12:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:22.317 12:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:22.317 12:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:17:22.317 /dev/nbd12 00:17:22.317 12:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:17:22.317 12:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:17:22.317 12:19:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:17:22.317 12:19:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:17:22.317 12:19:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:17:22.317 12:19:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:17:22.317 12:19:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:17:22.317 12:19:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:17:22.317 12:19:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:17:22.317 12:19:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:17:22.317 12:19:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:22.317 1+0 records in 00:17:22.317 1+0 records out 00:17:22.317 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000733893 s, 5.6 MB/s 00:17:22.317 12:19:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:22.317 12:19:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:17:22.317 12:19:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:22.317 12:19:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:17:22.317 12:19:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:17:22.317 12:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:22.317 12:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:22.317 12:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:17:22.576 /dev/nbd13 
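Each nbd_start_disk call above is paired with a waitfornbd check: wait for the new name to show up in /proc/partitions, then prove the device is readable by pulling a single 4 KiB block with O_DIRECT and confirming the copy is non-empty. A rough per-device sketch, reconstructed from the xtrace (the sleep intervals are assumptions; the grep, dd, stat and size check are taken from the trace):

    # Sketch: confirm an exported NBD device is present and readable.
    check_nbd() {
        local nbd_name=$1 i size
        local tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest

        # First wait for the kernel to publish the device node.
        for (( i = 1; i <= 20; i++ )); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                      # assumed poll interval
        done

        # Then read one 4 KiB block with O_DIRECT, retried a bounded number of times.
        for (( i = 1; i <= 20; i++ )); do
            dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct && break
            sleep 0.1                      # assumed
        done

        size=$(stat -c %s "$tmp")
        rm -f "$tmp"
        [ "$size" != 0 ]                   # an empty read would fail the start-up check
    }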
00:17:22.576 12:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:17:22.576 12:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:17:22.576 12:19:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:17:22.576 12:19:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:17:22.576 12:19:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:17:22.576 12:19:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:17:22.576 12:19:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:17:22.576 12:19:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:17:22.576 12:19:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:17:22.576 12:19:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:17:22.576 12:19:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:22.576 1+0 records in 00:17:22.576 1+0 records out 00:17:22.576 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000661577 s, 6.2 MB/s 00:17:22.576 12:19:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:22.576 12:19:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:17:22.576 12:19:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:22.576 12:19:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:17:22.576 12:19:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:17:22.576 12:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:22.576 12:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:22.577 12:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:22.577 12:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:22.577 12:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:22.836 12:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:17:22.836 { 00:17:22.836 "nbd_device": "/dev/nbd0", 00:17:22.836 "bdev_name": "Nvme0n1" 00:17:22.836 }, 00:17:22.836 { 00:17:22.836 "nbd_device": "/dev/nbd1", 00:17:22.836 "bdev_name": "Nvme1n1" 00:17:22.836 }, 00:17:22.836 { 00:17:22.836 "nbd_device": "/dev/nbd10", 00:17:22.836 "bdev_name": "Nvme2n1" 00:17:22.836 }, 00:17:22.836 { 00:17:22.836 "nbd_device": "/dev/nbd11", 00:17:22.836 "bdev_name": "Nvme2n2" 00:17:22.836 }, 00:17:22.836 { 00:17:22.836 "nbd_device": "/dev/nbd12", 00:17:22.836 "bdev_name": "Nvme2n3" 00:17:22.836 }, 00:17:22.836 { 00:17:22.836 "nbd_device": "/dev/nbd13", 00:17:22.836 "bdev_name": "Nvme3n1" 00:17:22.836 } 00:17:22.836 ]' 00:17:22.836 12:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:17:22.836 { 00:17:22.836 "nbd_device": "/dev/nbd0", 00:17:22.836 "bdev_name": "Nvme0n1" 00:17:22.836 }, 00:17:22.836 { 00:17:22.836 "nbd_device": "/dev/nbd1", 00:17:22.836 "bdev_name": "Nvme1n1" 00:17:22.836 }, 00:17:22.836 { 00:17:22.836 "nbd_device": "/dev/nbd10", 00:17:22.836 "bdev_name": "Nvme2n1" 
00:17:22.836 }, 00:17:22.836 { 00:17:22.836 "nbd_device": "/dev/nbd11", 00:17:22.836 "bdev_name": "Nvme2n2" 00:17:22.836 }, 00:17:22.836 { 00:17:22.836 "nbd_device": "/dev/nbd12", 00:17:22.836 "bdev_name": "Nvme2n3" 00:17:22.836 }, 00:17:22.836 { 00:17:22.836 "nbd_device": "/dev/nbd13", 00:17:22.836 "bdev_name": "Nvme3n1" 00:17:22.836 } 00:17:22.836 ]' 00:17:22.836 12:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:22.836 12:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:17:22.836 /dev/nbd1 00:17:22.836 /dev/nbd10 00:17:22.836 /dev/nbd11 00:17:22.836 /dev/nbd12 00:17:22.836 /dev/nbd13' 00:17:22.836 12:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:22.836 12:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:17:22.836 /dev/nbd1 00:17:22.836 /dev/nbd10 00:17:22.836 /dev/nbd11 00:17:22.836 /dev/nbd12 00:17:22.836 /dev/nbd13' 00:17:22.836 12:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:17:22.836 12:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:17:22.836 12:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:17:22.836 12:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:17:22.836 12:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:17:22.836 12:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:22.836 12:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:22.836 12:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:17:22.836 12:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:22.836 12:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:17:22.836 12:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:17:22.836 256+0 records in 00:17:22.836 256+0 records out 00:17:22.836 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0128473 s, 81.6 MB/s 00:17:22.836 12:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:22.836 12:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:17:23.096 256+0 records in 00:17:23.096 256+0 records out 00:17:23.096 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.121814 s, 8.6 MB/s 00:17:23.096 12:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:23.096 12:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:17:23.096 256+0 records in 00:17:23.096 256+0 records out 00:17:23.096 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.125177 s, 8.4 MB/s 00:17:23.096 12:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:23.096 12:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:17:23.355 256+0 records in 00:17:23.355 256+0 records out 00:17:23.355 
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.128059 s, 8.2 MB/s 00:17:23.355 12:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:23.355 12:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:17:23.355 256+0 records in 00:17:23.355 256+0 records out 00:17:23.355 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.125368 s, 8.4 MB/s 00:17:23.355 12:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:23.355 12:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:17:23.614 256+0 records in 00:17:23.614 256+0 records out 00:17:23.614 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.126196 s, 8.3 MB/s 00:17:23.615 12:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:23.615 12:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:17:23.615 256+0 records in 00:17:23.615 256+0 records out 00:17:23.615 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.124496 s, 8.4 MB/s 00:17:23.615 12:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:17:23.615 12:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:23.615 12:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:23.615 12:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:17:23.615 12:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:23.615 12:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:17:23.615 12:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:17:23.615 12:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:23.615 12:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:17:23.615 12:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:23.615 12:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:17:23.615 12:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:23.615 12:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:17:23.875 12:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:23.875 12:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:17:23.875 12:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:23.875 12:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:17:23.875 12:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:23.875 12:19:33 blockdev_nvme.bdev_nbd 
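The data path itself is exercised with the write/compare cycle whose dd and cmp calls surround this point: 1 MiB of random data is staged in a scratch file, copied onto every NBD device with O_DIRECT, then compared back byte-for-byte. Condensed into one place (commands as traced; only the loop framing is paraphrased):

    # Sketch of the nbd_dd_data_verify write/verify cycle.
    rand=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)

    # Write phase: stage 1 MiB of random data, then push it to every device.
    dd if=/dev/urandom of="$rand" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$rand" of="$dev" bs=4096 count=256 oflag=direct
    done

    # Verify phase: read the first 1 MiB back and compare byte-for-byte.
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$rand" "$dev"        # a mismatch makes the test fail here
    done
    rm "$rand"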
-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:17:23.875 12:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:23.875 12:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:17:23.875 12:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:23.875 12:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:23.875 12:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:23.875 12:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:23.875 12:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:23.875 12:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:23.875 12:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:23.875 12:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:23.875 12:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:23.875 12:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:23.875 12:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:23.875 12:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:23.875 12:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:23.875 12:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:23.875 12:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:23.875 12:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:17:24.134 12:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:24.134 12:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:24.134 12:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:24.134 12:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:24.134 12:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:24.134 12:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:24.134 12:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:24.134 12:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:24.134 12:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:24.134 12:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:17:24.394 12:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:17:24.394 12:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:17:24.394 12:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:17:24.394 12:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:24.394 12:19:33 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:24.394 12:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:17:24.394 12:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:24.394 12:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:24.394 12:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:24.394 12:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:17:24.653 12:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:17:24.653 12:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:17:24.653 12:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:17:24.653 12:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:24.653 12:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:24.653 12:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:17:24.653 12:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:24.653 12:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:24.653 12:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:24.653 12:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:17:24.912 12:19:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:17:24.912 12:19:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:17:24.912 12:19:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:17:24.912 12:19:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:24.912 12:19:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:24.912 12:19:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:17:24.912 12:19:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:24.912 12:19:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:24.912 12:19:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:24.912 12:19:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:17:24.912 12:19:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:17:24.912 12:19:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:17:24.912 12:19:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:17:24.912 12:19:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:24.912 12:19:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:24.912 12:19:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:17:24.912 12:19:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:24.912 12:19:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:24.912 12:19:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:24.912 12:19:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- 
# local rpc_server=/var/tmp/spdk-nbd.sock 00:17:24.912 12:19:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:25.172 12:19:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:17:25.172 12:19:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:25.172 12:19:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:17:25.172 12:19:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:17:25.172 12:19:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:17:25.172 12:19:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:25.172 12:19:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:17:25.172 12:19:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:17:25.172 12:19:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:17:25.172 12:19:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:17:25.172 12:19:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:17:25.172 12:19:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:17:25.172 12:19:34 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:17:25.172 12:19:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:25.172 12:19:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:25.172 12:19:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:17:25.172 12:19:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:17:25.172 12:19:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:17:25.431 malloc_lvol_verify 00:17:25.432 12:19:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:17:25.690 5779a90f-807c-4065-b71d-0deb840821e7 00:17:25.690 12:19:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:17:25.948 3c10acf8-e4b5-4a94-af98-73a0fb475497 00:17:25.948 12:19:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:17:26.206 /dev/nbd0 00:17:26.206 12:19:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:17:26.206 mke2fs 1.46.5 (30-Dec-2021) 00:17:26.206 Discarding device blocks: 0/4096 done 00:17:26.206 Creating filesystem with 4096 1k blocks and 1024 inodes 00:17:26.206 00:17:26.206 Allocating group tables: 0/1 done 00:17:26.206 Writing inode tables: 0/1 done 00:17:26.206 Creating journal (1024 blocks): done 00:17:26.206 Writing superblocks and filesystem accounting information: 0/1 done 00:17:26.206 00:17:26.206 12:19:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:17:26.206 12:19:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:26.206 12:19:35 
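The nbd_with_lvol_verify step traced just above layers logical volumes on top of the same transport: a small malloc bdev becomes an lvstore, a 4 MiB lvol is carved out of it, the lvol is exported as /dev/nbd0, and mkfs.ext4 has to succeed on it. Pulled together from the trace (the argument meanings are annotated as a sketch; the calls themselves are verbatim):

    # Sketch: build a logical volume over a malloc bdev and format it via NBD.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

    $rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB backing bdev, 512 B blocks
    $rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvstore named "lvs" on top of it
    $rpc bdev_lvol_create lvol 4 -l lvs                    # 4 MiB lvol inside that store
    $rpc nbd_start_disk lvs/lvol /dev/nbd0                 # export the lvol as /dev/nbd0

    mkfs.ext4 /dev/nbd0                                    # must succeed for mkfs_ret=0
    $rpc nbd_stop_disk /dev/nbd0

The two UUIDs printed above (5779a90f-... and 3c10acf8-...) are the handles returned by the lvstore and lvol create calls respectively.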
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:26.206 12:19:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:26.206 12:19:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:26.206 12:19:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:26.206 12:19:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:26.206 12:19:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:26.206 12:19:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:26.206 12:19:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:26.206 12:19:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:26.206 12:19:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:26.206 12:19:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:26.206 12:19:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:26.206 12:19:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:26.206 12:19:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:26.206 12:19:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:17:26.206 12:19:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:17:26.206 12:19:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 66898 00:17:26.206 12:19:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 66898 ']' 00:17:26.206 12:19:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 66898 00:17:26.206 12:19:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:17:26.206 12:19:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:26.206 12:19:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66898 00:17:26.464 12:19:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:26.464 12:19:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:26.464 killing process with pid 66898 00:17:26.464 12:19:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66898' 00:17:26.464 12:19:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@967 -- # kill 66898 00:17:26.464 12:19:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # wait 66898 00:17:27.850 12:19:37 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:17:27.850 00:17:27.850 real 0m10.845s 00:17:27.850 user 0m13.825s 00:17:27.850 sys 0m4.270s 00:17:27.850 12:19:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:27.850 ************************************ 00:17:27.850 END TEST bdev_nbd 00:17:27.850 12:19:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:17:27.850 ************************************ 00:17:27.850 12:19:37 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:17:27.850 12:19:37 blockdev_nvme -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:17:27.850 12:19:37 blockdev_nvme -- bdev/blockdev.sh@764 -- # '[' nvme = nvme ']' 00:17:27.850 skipping fio tests on NVMe due to multi-ns failures. 
00:17:27.850 12:19:37 blockdev_nvme -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:17:27.850 12:19:37 blockdev_nvme -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:27.850 12:19:37 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:17:27.850 12:19:37 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:17:27.850 12:19:37 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:27.850 12:19:37 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:17:27.850 ************************************ 00:17:27.850 START TEST bdev_verify 00:17:27.850 ************************************ 00:17:27.850 12:19:37 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:17:27.850 [2024-07-10 12:19:37.283329] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:17:27.850 [2024-07-10 12:19:37.283491] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67284 ] 00:17:28.108 [2024-07-10 12:19:37.449609] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:28.366 [2024-07-10 12:19:37.697869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:28.367 [2024-07-10 12:19:37.697912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:29.302 Running I/O for 5 seconds... 
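bdev_verify drives the same six NVMe bdevs through the bdevperf example application instead of NBD. The invocation is in the trace above; the flag annotations below are a reading aid for the result table that follows, not documentation copied from bdevperf itself, and -C is simply passed through as the harness uses it:

    # The command as run by the harness; the trailing '' is an empty positional
    # argument passed along by run_test.
    #   -q 128    : keep 128 I/Os outstanding per job
    #   -o 4096   : 4 KiB I/O size
    #   -w verify : verification workload (data read back is checked)
    #   -t 5      : run for 5 seconds
    #   -m 0x3    : core mask, i.e. the two reactors on cores 0 and 1 logged above
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''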
00:17:34.570 00:17:34.570 Latency(us) 00:17:34.570 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:34.570 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:34.570 Verification LBA range: start 0x0 length 0xbd0bd 00:17:34.570 Nvme0n1 : 5.06 1821.40 7.11 0.00 0.00 70113.92 14949.58 73273.99 00:17:34.570 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:34.570 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:17:34.570 Nvme0n1 : 5.06 1845.68 7.21 0.00 0.00 69184.94 14633.74 73695.10 00:17:34.570 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:34.570 Verification LBA range: start 0x0 length 0xa0000 00:17:34.571 Nvme1n1 : 5.06 1820.96 7.11 0.00 0.00 70010.28 16528.76 65693.92 00:17:34.571 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:34.571 Verification LBA range: start 0xa0000 length 0xa0000 00:17:34.571 Nvme1n1 : 5.07 1844.74 7.21 0.00 0.00 69090.41 16949.87 66536.15 00:17:34.571 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:34.571 Verification LBA range: start 0x0 length 0x80000 00:17:34.571 Nvme2n1 : 5.06 1820.43 7.11 0.00 0.00 69939.98 15265.41 61061.65 00:17:34.571 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:34.571 Verification LBA range: start 0x80000 length 0x80000 00:17:34.571 Nvme2n1 : 5.07 1843.88 7.20 0.00 0.00 68836.40 17265.71 60640.54 00:17:34.571 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:34.571 Verification LBA range: start 0x0 length 0x80000 00:17:34.571 Nvme2n2 : 5.07 1819.52 7.11 0.00 0.00 69775.35 15897.09 62325.00 00:17:34.571 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:34.571 Verification LBA range: start 0x80000 length 0x80000 00:17:34.571 Nvme2n2 : 5.07 1843.02 7.20 0.00 0.00 68702.55 17476.27 64009.46 00:17:34.571 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:34.571 Verification LBA range: start 0x0 length 0x80000 00:17:34.571 Nvme2n3 : 5.07 1818.67 7.10 0.00 0.00 69667.23 15897.09 66115.03 00:17:34.571 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:34.571 Verification LBA range: start 0x80000 length 0x80000 00:17:34.571 Nvme2n3 : 5.07 1842.20 7.20 0.00 0.00 68621.03 12264.97 66536.15 00:17:34.571 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:34.571 Verification LBA range: start 0x0 length 0x20000 00:17:34.571 Nvme3n1 : 5.07 1817.82 7.10 0.00 0.00 69587.01 15265.41 68641.72 00:17:34.571 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:34.571 Verification LBA range: start 0x20000 length 0x20000 00:17:34.571 Nvme3n1 : 5.08 1852.73 7.24 0.00 0.00 68220.79 3053.08 67378.38 00:17:34.571 =================================================================================================================== 00:17:34.571 Total : 21991.06 85.90 0.00 0.00 69308.17 3053.08 73695.10 00:17:35.949 00:17:35.949 real 0m8.018s 00:17:35.949 user 0m14.583s 00:17:35.949 sys 0m0.289s 00:17:35.949 12:19:45 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:35.949 12:19:45 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:17:35.949 ************************************ 00:17:35.949 END TEST bdev_verify 00:17:35.949 ************************************ 00:17:35.949 12:19:45 blockdev_nvme -- 
common/autotest_common.sh@1142 -- # return 0 00:17:35.949 12:19:45 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:17:35.949 12:19:45 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:17:35.949 12:19:45 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:35.949 12:19:45 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:17:35.949 ************************************ 00:17:35.949 START TEST bdev_verify_big_io 00:17:35.949 ************************************ 00:17:35.949 12:19:45 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:17:35.949 [2024-07-10 12:19:45.370553] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:17:35.949 [2024-07-10 12:19:45.370772] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67383 ] 00:17:36.208 [2024-07-10 12:19:45.543380] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:36.468 [2024-07-10 12:19:45.794357] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:36.468 [2024-07-10 12:19:45.794391] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:37.405 Running I/O for 5 seconds... 00:17:44.008 00:17:44.008 Latency(us) 00:17:44.008 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:44.008 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:44.008 Verification LBA range: start 0x0 length 0xbd0b 00:17:44.008 Nvme0n1 : 5.63 147.88 9.24 0.00 0.00 838724.20 25266.89 855705.39 00:17:44.008 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:44.008 Verification LBA range: start 0xbd0b length 0xbd0b 00:17:44.008 Nvme0n1 : 5.63 147.83 9.24 0.00 0.00 839173.92 20634.63 869181.07 00:17:44.008 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:44.008 Verification LBA range: start 0x0 length 0xa000 00:17:44.008 Nvme1n1 : 5.71 153.51 9.59 0.00 0.00 793040.28 29267.48 731055.40 00:17:44.008 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:44.008 Verification LBA range: start 0xa000 length 0xa000 00:17:44.008 Nvme1n1 : 5.74 153.02 9.56 0.00 0.00 794195.80 73273.99 717579.72 00:17:44.008 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:44.008 Verification LBA range: start 0x0 length 0x8000 00:17:44.008 Nvme2n1 : 5.71 153.12 9.57 0.00 0.00 774183.05 29267.48 754637.83 00:17:44.009 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:44.009 Verification LBA range: start 0x8000 length 0x8000 00:17:44.009 Nvme2n1 : 5.75 152.12 9.51 0.00 0.00 775796.68 73273.99 663677.02 00:17:44.009 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:44.009 Verification LBA range: start 0x0 length 0x8000 00:17:44.009 Nvme2n2 : 5.71 153.60 9.60 0.00 0.00 751644.05 29056.93 771482.42 00:17:44.009 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 
00:17:44.009 Verification LBA range: start 0x8000 length 0x8000 00:17:44.009 Nvme2n2 : 5.75 155.89 9.74 0.00 0.00 742143.38 38110.89 670414.86 00:17:44.009 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:44.009 Verification LBA range: start 0x0 length 0x8000 00:17:44.009 Nvme2n3 : 5.71 156.86 9.80 0.00 0.00 719370.40 47164.86 795064.85 00:17:44.009 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:44.009 Verification LBA range: start 0x8000 length 0x8000 00:17:44.009 Nvme2n3 : 5.77 159.91 9.99 0.00 0.00 706308.13 12422.89 1111743.23 00:17:44.009 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:44.009 Verification LBA range: start 0x0 length 0x2000 00:17:44.009 Nvme3n1 : 5.77 174.06 10.88 0.00 0.00 633769.90 1013.31 808540.53 00:17:44.009 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:44.009 Verification LBA range: start 0x2000 length 0x2000 00:17:44.009 Nvme3n1 : 5.78 163.12 10.19 0.00 0.00 673639.14 9001.33 1556440.52 00:17:44.009 =================================================================================================================== 00:17:44.009 Total : 1870.92 116.93 0.00 0.00 750580.65 1013.31 1556440.52 00:17:45.386 00:17:45.386 real 0m9.245s 00:17:45.386 user 0m16.982s 00:17:45.386 sys 0m0.326s 00:17:45.386 12:19:54 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:45.386 ************************************ 00:17:45.386 END TEST bdev_verify_big_io 00:17:45.386 12:19:54 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:17:45.386 ************************************ 00:17:45.386 12:19:54 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:17:45.386 12:19:54 blockdev_nvme -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:45.386 12:19:54 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:17:45.386 12:19:54 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:45.386 12:19:54 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:17:45.386 ************************************ 00:17:45.386 START TEST bdev_write_zeroes 00:17:45.386 ************************************ 00:17:45.386 12:19:54 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:45.386 [2024-07-10 12:19:54.691611] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:17:45.386 [2024-07-10 12:19:54.691763] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67504 ] 00:17:45.386 [2024-07-10 12:19:54.862303] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:45.646 [2024-07-10 12:19:55.107922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:46.584 Running I/O for 1 seconds... 
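Every one of these sub-tests (bdev_verify, bdev_verify_big_io, bdev_write_zeroes and the JSON tests below) is launched through the same run_test helper, which is what produces the START TEST / END TEST banners and the real/user/sys timing lines scattered through this log. A stripped-down view of that pattern, inferred from the banners rather than copied from autotest_common.sh, which also handles xtrace and return-code bookkeeping:

    # Sketch of the run_test wrapper visible in the banners above.
    run_test() {
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                          # e.g. run_test bdev_write_zeroes $bdevperf ... -w write_zeroes -t 1 ''
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }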
00:17:47.518 00:17:47.518 Latency(us) 00:17:47.518 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:47.518 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:47.518 Nvme0n1 : 1.01 12233.19 47.79 0.00 0.00 10432.06 8685.49 30530.83 00:17:47.518 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:47.518 Nvme1n1 : 1.01 12246.76 47.84 0.00 0.00 10417.84 7422.15 31373.06 00:17:47.518 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:47.518 Nvme2n1 : 1.01 12233.99 47.79 0.00 0.00 10389.49 7843.26 31162.50 00:17:47.518 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:47.518 Nvme2n2 : 1.02 12254.02 47.87 0.00 0.00 10325.46 6553.60 26424.96 00:17:47.518 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:47.518 Nvme2n3 : 1.02 12272.75 47.94 0.00 0.00 10273.54 5342.89 22845.48 00:17:47.518 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:47.518 Nvme3n1 : 1.02 12261.34 47.90 0.00 0.00 10249.29 5395.53 21476.86 00:17:47.518 =================================================================================================================== 00:17:47.518 Total : 73502.06 287.12 0.00 0.00 10347.56 5342.89 31373.06 00:17:48.895 00:17:48.895 real 0m3.619s 00:17:48.895 user 0m3.223s 00:17:48.895 sys 0m0.280s 00:17:48.895 12:19:58 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:48.895 12:19:58 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:17:48.895 ************************************ 00:17:48.895 END TEST bdev_write_zeroes 00:17:48.895 ************************************ 00:17:48.895 12:19:58 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:17:48.895 12:19:58 blockdev_nvme -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:48.895 12:19:58 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:17:48.895 12:19:58 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:48.895 12:19:58 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:17:48.895 ************************************ 00:17:48.895 START TEST bdev_json_nonenclosed 00:17:48.895 ************************************ 00:17:48.895 12:19:58 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:49.155 [2024-07-10 12:19:58.377022] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:17:49.155 [2024-07-10 12:19:58.377166] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67562 ] 00:17:49.155 [2024-07-10 12:19:58.547038] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.414 [2024-07-10 12:19:58.795089] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:49.414 [2024-07-10 12:19:58.795179] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:17:49.414 [2024-07-10 12:19:58.795199] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:17:49.414 [2024-07-10 12:19:58.795215] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:49.982 00:17:49.982 real 0m0.979s 00:17:49.982 user 0m0.716s 00:17:49.982 sys 0m0.157s 00:17:49.982 12:19:59 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:17:49.982 ************************************ 00:17:49.982 END TEST bdev_json_nonenclosed 00:17:49.982 ************************************ 00:17:49.982 12:19:59 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:49.982 12:19:59 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:17:49.982 12:19:59 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 234 00:17:49.982 12:19:59 blockdev_nvme -- bdev/blockdev.sh@782 -- # true 00:17:49.982 12:19:59 blockdev_nvme -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:49.982 12:19:59 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:17:49.982 12:19:59 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:49.982 12:19:59 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:17:49.982 ************************************ 00:17:49.982 START TEST bdev_json_nonarray 00:17:49.982 ************************************ 00:17:49.982 12:19:59 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:49.982 [2024-07-10 12:19:59.431325] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:17:49.982 [2024-07-10 12:19:59.431457] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67601 ] 00:17:50.242 [2024-07-10 12:19:59.601627] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.500 [2024-07-10 12:19:59.837497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:50.500 [2024-07-10 12:19:59.837615] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
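The two JSON tests are negative tests: bdevperf is pointed at deliberately malformed config files and must fail cleanly. The two errors logged here pin down the expected shape of the file, so a minimal well-formed config would look roughly like the sketch below (the path and the empty bdev list are illustrative only, not the file the job uses):

    # Shape of a config bdevperf will accept (sketch, for contrast with the two broken files).
    cat > /tmp/minimal_bdev.json << 'EOF'
    {
      "subsystems": [
        { "subsystem": "bdev", "config": [] }
      ]
    }
    EOF
    # nonenclosed.json drops the outer {...}; nonarray.json makes "subsystems" a non-array.
    # Either way json_config_prepare_ctx rejects the file and the app stops with a non-zero
    # status, which the wrapper records as es=234 in the surrounding trace.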
00:17:50.500 [2024-07-10 12:19:59.837635] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:17:50.500 [2024-07-10 12:19:59.837651] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:51.068 00:17:51.068 real 0m0.974s 00:17:51.068 user 0m0.709s 00:17:51.068 sys 0m0.158s 00:17:51.068 12:20:00 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:17:51.068 12:20:00 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:51.068 12:20:00 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:17:51.068 ************************************ 00:17:51.068 END TEST bdev_json_nonarray 00:17:51.068 ************************************ 00:17:51.068 12:20:00 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 234 00:17:51.068 12:20:00 blockdev_nvme -- bdev/blockdev.sh@785 -- # true 00:17:51.068 12:20:00 blockdev_nvme -- bdev/blockdev.sh@787 -- # [[ nvme == bdev ]] 00:17:51.068 12:20:00 blockdev_nvme -- bdev/blockdev.sh@794 -- # [[ nvme == gpt ]] 00:17:51.068 12:20:00 blockdev_nvme -- bdev/blockdev.sh@798 -- # [[ nvme == crypto_sw ]] 00:17:51.068 12:20:00 blockdev_nvme -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:17:51.068 12:20:00 blockdev_nvme -- bdev/blockdev.sh@811 -- # cleanup 00:17:51.068 12:20:00 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:17:51.068 12:20:00 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:51.068 12:20:00 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:17:51.068 12:20:00 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:17:51.068 12:20:00 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:17:51.068 12:20:00 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:17:51.068 00:17:51.068 real 0m44.596s 00:17:51.068 user 1m4.466s 00:17:51.068 sys 0m7.381s 00:17:51.068 12:20:00 blockdev_nvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:51.068 12:20:00 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:17:51.068 ************************************ 00:17:51.068 END TEST blockdev_nvme 00:17:51.068 ************************************ 00:17:51.068 12:20:00 -- common/autotest_common.sh@1142 -- # return 0 00:17:51.068 12:20:00 -- spdk/autotest.sh@213 -- # uname -s 00:17:51.068 12:20:00 -- spdk/autotest.sh@213 -- # [[ Linux == Linux ]] 00:17:51.068 12:20:00 -- spdk/autotest.sh@214 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:17:51.068 12:20:00 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:51.068 12:20:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:51.068 12:20:00 -- common/autotest_common.sh@10 -- # set +x 00:17:51.068 ************************************ 00:17:51.068 START TEST blockdev_nvme_gpt 00:17:51.068 ************************************ 00:17:51.068 12:20:00 blockdev_nvme_gpt -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:17:51.327 * Looking for test storage... 
00:17:51.327 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:17:51.327 12:20:00 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:17:51.327 12:20:00 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:17:51.327 12:20:00 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:17:51.327 12:20:00 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:51.327 12:20:00 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:17:51.327 12:20:00 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:17:51.327 12:20:00 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:17:51.327 12:20:00 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:17:51.327 12:20:00 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:17:51.327 12:20:00 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:17:51.327 12:20:00 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:17:51.327 12:20:00 blockdev_nvme_gpt -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:17:51.327 12:20:00 blockdev_nvme_gpt -- bdev/blockdev.sh@674 -- # uname -s 00:17:51.327 12:20:00 blockdev_nvme_gpt -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:17:51.327 12:20:00 blockdev_nvme_gpt -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:17:51.327 12:20:00 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # test_type=gpt 00:17:51.327 12:20:00 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # crypto_device= 00:17:51.327 12:20:00 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # dek= 00:17:51.327 12:20:00 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # env_ctx= 00:17:51.327 12:20:00 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:17:51.328 12:20:00 blockdev_nvme_gpt -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:17:51.328 12:20:00 blockdev_nvme_gpt -- bdev/blockdev.sh@690 -- # [[ gpt == bdev ]] 00:17:51.328 12:20:00 blockdev_nvme_gpt -- bdev/blockdev.sh@690 -- # [[ gpt == crypto_* ]] 00:17:51.328 12:20:00 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:17:51.328 12:20:00 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=67678 00:17:51.328 12:20:00 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:17:51.328 12:20:00 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 67678 00:17:51.328 12:20:00 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:17:51.328 12:20:00 blockdev_nvme_gpt -- common/autotest_common.sh@829 -- # '[' -z 67678 ']' 00:17:51.328 12:20:00 blockdev_nvme_gpt -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:51.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:51.328 12:20:00 blockdev_nvme_gpt -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:51.328 12:20:00 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:51.328 12:20:00 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:51.328 12:20:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:17:51.328 [2024-07-10 12:20:00.712385] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:17:51.328 [2024-07-10 12:20:00.712529] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67678 ] 00:17:51.587 [2024-07-10 12:20:00.882165] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.845 [2024-07-10 12:20:01.131927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:52.781 12:20:02 blockdev_nvme_gpt -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:52.781 12:20:02 blockdev_nvme_gpt -- common/autotest_common.sh@862 -- # return 0 00:17:52.781 12:20:02 blockdev_nvme_gpt -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:17:52.781 12:20:02 blockdev_nvme_gpt -- bdev/blockdev.sh@702 -- # setup_gpt_conf 00:17:52.781 12:20:02 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:53.349 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:53.349 Waiting for block devices as requested 00:17:53.608 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:53.608 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:53.868 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:17:53.868 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:17:59.178 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:17:59.178 12:20:08 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:17:59.178 12:20:08 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:17:59.178 12:20:08 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:17:59.178 12:20:08 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # local nvme bdf 00:17:59.178 12:20:08 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:17:59.178 12:20:08 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:17:59.178 12:20:08 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:17:59.178 12:20:08 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:17:59.178 12:20:08 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:59.178 12:20:08 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:17:59.178 12:20:08 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:17:59.178 12:20:08 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:17:59.178 12:20:08 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:17:59.178 12:20:08 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:59.178 12:20:08 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:17:59.178 12:20:08 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:17:59.178 12:20:08 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local 
device=nvme2n1 00:17:59.178 12:20:08 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:17:59.178 12:20:08 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:59.178 12:20:08 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:17:59.178 12:20:08 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:17:59.178 12:20:08 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:17:59.178 12:20:08 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:17:59.178 12:20:08 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:59.178 12:20:08 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:17:59.178 12:20:08 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:17:59.178 12:20:08 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:17:59.178 12:20:08 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:17:59.178 12:20:08 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:59.178 12:20:08 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:17:59.178 12:20:08 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:17:59.178 12:20:08 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:17:59.178 12:20:08 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:17:59.178 12:20:08 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:59.178 12:20:08 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:17:59.178 12:20:08 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:17:59.178 12:20:08 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:17:59.178 12:20:08 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:17:59.178 12:20:08 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:59.178 12:20:08 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # nvme_devs=('/sys/bus/pci/drivers/nvme/0000:00:10.0/nvme/nvme1/nvme1n1' '/sys/bus/pci/drivers/nvme/0000:00:11.0/nvme/nvme0/nvme0n1' '/sys/bus/pci/drivers/nvme/0000:00:12.0/nvme/nvme2/nvme2n1' '/sys/bus/pci/drivers/nvme/0000:00:12.0/nvme/nvme2/nvme2n2' '/sys/bus/pci/drivers/nvme/0000:00:12.0/nvme/nvme2/nvme2n3' '/sys/bus/pci/drivers/nvme/0000:00:13.0/nvme/nvme3/nvme3c3n1') 00:17:59.178 12:20:08 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # local nvme_devs nvme_dev 00:17:59.178 12:20:08 blockdev_nvme_gpt -- bdev/blockdev.sh@108 -- # gpt_nvme= 00:17:59.178 12:20:08 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # for nvme_dev in "${nvme_devs[@]}" 00:17:59.178 12:20:08 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # [[ -z '' ]] 00:17:59.178 12:20:08 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # dev=/dev/nvme1n1 00:17:59.178 12:20:08 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # parted /dev/nvme1n1 -ms print 00:17:59.178 12:20:08 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # pt='Error: /dev/nvme1n1: unrecognised disk label 00:17:59.178 BYT; 00:17:59.178 /dev/nvme1n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:17:59.178 
12:20:08 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # [[ Error: /dev/nvme1n1: unrecognised disk label 00:17:59.178 BYT; 00:17:59.178 /dev/nvme1n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\1\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:17:59.178 12:20:08 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # gpt_nvme=/dev/nvme1n1 00:17:59.178 12:20:08 blockdev_nvme_gpt -- bdev/blockdev.sh@116 -- # break 00:17:59.178 12:20:08 blockdev_nvme_gpt -- bdev/blockdev.sh@119 -- # [[ -n /dev/nvme1n1 ]] 00:17:59.178 12:20:08 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:17:59.178 12:20:08 blockdev_nvme_gpt -- bdev/blockdev.sh@125 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:17:59.178 12:20:08 blockdev_nvme_gpt -- bdev/blockdev.sh@128 -- # parted -s /dev/nvme1n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:17:59.178 12:20:08 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt_old 00:17:59.178 12:20:08 blockdev_nvme_gpt -- scripts/common.sh@408 -- # local spdk_guid 00:17:59.178 12:20:08 blockdev_nvme_gpt -- scripts/common.sh@410 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:17:59.178 12:20:08 blockdev_nvme_gpt -- scripts/common.sh@412 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:17:59.178 12:20:08 blockdev_nvme_gpt -- scripts/common.sh@413 -- # IFS='()' 00:17:59.178 12:20:08 blockdev_nvme_gpt -- scripts/common.sh@413 -- # read -r _ spdk_guid _ 00:17:59.178 12:20:08 blockdev_nvme_gpt -- scripts/common.sh@413 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:17:59.178 12:20:08 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:17:59.178 12:20:08 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:17:59.178 12:20:08 blockdev_nvme_gpt -- scripts/common.sh@416 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:17:59.178 12:20:08 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:17:59.178 12:20:08 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # get_spdk_gpt 00:17:59.178 12:20:08 blockdev_nvme_gpt -- scripts/common.sh@420 -- # local spdk_guid 00:17:59.178 12:20:08 blockdev_nvme_gpt -- scripts/common.sh@422 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:17:59.178 12:20:08 blockdev_nvme_gpt -- scripts/common.sh@424 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:17:59.178 12:20:08 blockdev_nvme_gpt -- scripts/common.sh@425 -- # IFS='()' 00:17:59.178 12:20:08 blockdev_nvme_gpt -- scripts/common.sh@425 -- # read -r _ spdk_guid _ 00:17:59.178 12:20:08 blockdev_nvme_gpt -- scripts/common.sh@425 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:17:59.178 12:20:08 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:17:59.178 12:20:08 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:17:59.178 12:20:08 blockdev_nvme_gpt -- scripts/common.sh@428 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:17:59.178 12:20:08 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:17:59.178 12:20:08 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 
1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme1n1 00:18:00.115 The operation has completed successfully. 00:18:00.115 12:20:09 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme1n1 00:18:01.050 The operation has completed successfully. 00:18:01.050 12:20:10 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:01.985 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:02.552 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:18:02.552 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:18:02.552 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:18:02.552 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:18:02.811 12:20:12 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # rpc_cmd bdev_get_bdevs 00:18:02.811 12:20:12 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.811 12:20:12 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:18:02.812 [] 00:18:02.812 12:20:12 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.812 12:20:12 blockdev_nvme_gpt -- bdev/blockdev.sh@136 -- # setup_nvme_conf 00:18:02.812 12:20:12 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:18:02.812 12:20:12 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:18:02.812 12:20:12 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:18:02.812 12:20:12 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:18:02.812 12:20:12 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.812 12:20:12 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:18:03.071 12:20:12 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.071 12:20:12 blockdev_nvme_gpt -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:18:03.071 12:20:12 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.071 12:20:12 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:18:03.071 12:20:12 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.071 12:20:12 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # cat 00:18:03.071 12:20:12 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:18:03.071 12:20:12 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.071 12:20:12 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:18:03.071 12:20:12 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.071 12:20:12 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:18:03.071 12:20:12 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.071 12:20:12 blockdev_nvme_gpt -- 
common/autotest_common.sh@10 -- # set +x 00:18:03.071 12:20:12 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.071 12:20:12 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:18:03.071 12:20:12 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.071 12:20:12 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:18:03.071 12:20:12 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.071 12:20:12 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:18:03.071 12:20:12 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:18:03.071 12:20:12 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.071 12:20:12 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:18:03.071 12:20:12 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:18:03.330 12:20:12 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.330 12:20:12 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:18:03.330 12:20:12 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # jq -r .name 00:18:03.330 12:20:12 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774144,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774143,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 774400,' ' 
"partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "daefb02c-00f5-470a-bd1d-6dcfb9554153"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "daefb02c-00f5-470a-bd1d-6dcfb9554153",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "5085db14-50a6-4923-9376-e1430e1cc3e4"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5085db14-50a6-4923-9376-e1430e1cc3e4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "f56c8b6b-577a-41ce-97be-7f970f6826b6"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f56c8b6b-577a-41ce-97be-7f970f6826b6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "69397862-bde8-4086-b821-9531938fcd1c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "69397862-bde8-4086-b821-9531938fcd1c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "640647c8-a1bb-42de-b257-2f4d68579a1a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "640647c8-a1bb-42de-b257-2f4d68579a1a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": 
false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:18:03.330 12:20:12 blockdev_nvme_gpt -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:18:03.330 12:20:12 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1p1 00:18:03.330 12:20:12 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:18:03.330 12:20:12 blockdev_nvme_gpt -- bdev/blockdev.sh@754 -- # killprocess 67678 00:18:03.330 12:20:12 blockdev_nvme_gpt -- common/autotest_common.sh@948 -- # '[' -z 67678 ']' 00:18:03.330 12:20:12 blockdev_nvme_gpt -- common/autotest_common.sh@952 -- # kill -0 67678 00:18:03.330 12:20:12 blockdev_nvme_gpt -- common/autotest_common.sh@953 -- # uname 00:18:03.330 12:20:12 blockdev_nvme_gpt -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:03.330 12:20:12 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67678 00:18:03.330 killing process with pid 67678 00:18:03.330 12:20:12 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:03.330 12:20:12 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:03.330 12:20:12 blockdev_nvme_gpt -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67678' 00:18:03.330 12:20:12 blockdev_nvme_gpt -- common/autotest_common.sh@967 -- # kill 67678 00:18:03.330 12:20:12 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # wait 67678 00:18:05.869 12:20:15 blockdev_nvme_gpt -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:05.869 12:20:15 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:18:05.869 12:20:15 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:18:05.869 12:20:15 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:05.869 12:20:15 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:18:05.869 ************************************ 00:18:05.869 START TEST bdev_hello_world 00:18:05.869 ************************************ 00:18:05.869 12:20:15 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:18:05.869 [2024-07-10 12:20:15.337526] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:18:05.869 [2024-07-10 12:20:15.337662] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68332 ] 00:18:06.127 [2024-07-10 12:20:15.513315] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:06.386 [2024-07-10 12:20:15.756670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:07.322 [2024-07-10 12:20:16.441591] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:18:07.322 [2024-07-10 12:20:16.441652] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:18:07.322 [2024-07-10 12:20:16.441676] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:18:07.322 [2024-07-10 12:20:16.444582] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:18:07.322 [2024-07-10 12:20:16.445185] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:18:07.322 [2024-07-10 12:20:16.445220] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:18:07.322 [2024-07-10 12:20:16.445467] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:18:07.322 00:18:07.322 [2024-07-10 12:20:16.445494] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:18:08.700 00:18:08.700 real 0m2.516s 00:18:08.700 user 0m2.143s 00:18:08.700 sys 0m0.264s 00:18:08.700 12:20:17 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:08.700 12:20:17 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:18:08.700 ************************************ 00:18:08.700 END TEST bdev_hello_world 00:18:08.700 ************************************ 00:18:08.701 12:20:17 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:18:08.701 12:20:17 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:18:08.701 12:20:17 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:08.701 12:20:17 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:08.701 12:20:17 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:18:08.701 ************************************ 00:18:08.701 START TEST bdev_bounds 00:18:08.701 ************************************ 00:18:08.701 12:20:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:18:08.701 Process bdevio pid: 68374 00:18:08.701 12:20:17 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=68374 00:18:08.701 12:20:17 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:18:08.701 12:20:17 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:18:08.701 12:20:17 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 68374' 00:18:08.701 12:20:17 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 68374 00:18:08.701 12:20:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 68374 ']' 00:18:08.701 12:20:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:08.701 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock... 00:18:08.701 12:20:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:08.701 12:20:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:08.701 12:20:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:08.701 12:20:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:18:08.701 [2024-07-10 12:20:17.927117] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:18:08.701 [2024-07-10 12:20:17.927247] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68374 ] 00:18:08.701 [2024-07-10 12:20:18.098544] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:08.959 [2024-07-10 12:20:18.346307] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:08.959 [2024-07-10 12:20:18.346440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:08.959 [2024-07-10 12:20:18.346494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:09.897 12:20:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:09.897 12:20:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:18:09.897 12:20:19 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:18:09.897 I/O targets: 00:18:09.897 Nvme0n1p1: 774144 blocks of 4096 bytes (3024 MiB) 00:18:09.897 Nvme0n1p2: 774143 blocks of 4096 bytes (3024 MiB) 00:18:09.897 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:18:09.897 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:18:09.897 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:18:09.897 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:18:09.897 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:18:09.897 00:18:09.897 00:18:09.897 CUnit - A unit testing framework for C - Version 2.1-3 00:18:09.897 http://cunit.sourceforge.net/ 00:18:09.897 00:18:09.897 00:18:09.897 Suite: bdevio tests on: Nvme3n1 00:18:09.897 Test: blockdev write read block ...passed 00:18:09.897 Test: blockdev write zeroes read block ...passed 00:18:09.897 Test: blockdev write zeroes read no split ...passed 00:18:09.897 Test: blockdev write zeroes read split ...passed 00:18:09.897 Test: blockdev write zeroes read split partial ...passed 00:18:09.897 Test: blockdev reset ...[2024-07-10 12:20:19.261928] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:18:09.897 [2024-07-10 12:20:19.266249] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:09.897 passed 00:18:09.897 Test: blockdev write read 8 blocks ...passed 00:18:09.897 Test: blockdev write read size > 128k ...passed 00:18:09.897 Test: blockdev write read invalid size ...passed 00:18:09.897 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:09.897 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:09.897 Test: blockdev write read max offset ...passed 00:18:09.897 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:09.897 Test: blockdev writev readv 8 blocks ...passed 00:18:09.897 Test: blockdev writev readv 30 x 1block ...passed 00:18:09.897 Test: blockdev writev readv block ...passed 00:18:09.897 Test: blockdev writev readv size > 128k ...passed 00:18:09.897 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:09.897 Test: blockdev comparev and writev ...[2024-07-10 12:20:19.277256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x26a604000 len:0x1000 00:18:09.897 passed 00:18:09.897 Test: blockdev nvme passthru rw ...[2024-07-10 12:20:19.277508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:18:09.897 passed 00:18:09.897 Test: blockdev nvme passthru vendor specific ...passed 00:18:09.897 Test: blockdev nvme admin passthru ...[2024-07-10 12:20:19.278528] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:18:09.897 [2024-07-10 12:20:19.278653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:18:09.897 passed 00:18:09.897 Test: blockdev copy ...passed 00:18:09.897 Suite: bdevio tests on: Nvme2n3 00:18:09.897 Test: blockdev write read block ...passed 00:18:09.897 Test: blockdev write zeroes read block ...passed 00:18:09.897 Test: blockdev write zeroes read no split ...passed 00:18:09.897 Test: blockdev write zeroes read split ...passed 00:18:09.897 Test: blockdev write zeroes read split partial ...passed 00:18:09.897 Test: blockdev reset ...[2024-07-10 12:20:19.364230] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:18:09.897 [2024-07-10 12:20:19.368978] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:09.897 passed 00:18:09.897 Test: blockdev write read 8 blocks ...passed 00:18:09.897 Test: blockdev write read size > 128k ...passed 00:18:09.897 Test: blockdev write read invalid size ...passed 00:18:09.897 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:09.897 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:09.897 Test: blockdev write read max offset ...passed 00:18:09.897 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:09.897 Test: blockdev writev readv 8 blocks ...passed 00:18:09.897 Test: blockdev writev readv 30 x 1block ...passed 00:18:09.897 Test: blockdev writev readv block ...passed 00:18:09.897 Test: blockdev writev readv size > 128k ...passed 00:18:10.156 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:10.156 Test: blockdev comparev and writev ...[2024-07-10 12:20:19.379576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x283a3a000 len:0x1000 00:18:10.156 [2024-07-10 12:20:19.379888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:18:10.156 passed 00:18:10.156 Test: blockdev nvme passthru rw ...passed 00:18:10.156 Test: blockdev nvme passthru vendor specific ...[2024-07-10 12:20:19.381095] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:18:10.156 [2024-07-10 12:20:19.381309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:18:10.156 passed 00:18:10.156 Test: blockdev nvme admin passthru ...passed 00:18:10.156 Test: blockdev copy ...passed 00:18:10.156 Suite: bdevio tests on: Nvme2n2 00:18:10.156 Test: blockdev write read block ...passed 00:18:10.156 Test: blockdev write zeroes read block ...passed 00:18:10.156 Test: blockdev write zeroes read no split ...passed 00:18:10.157 Test: blockdev write zeroes read split ...passed 00:18:10.157 Test: blockdev write zeroes read split partial ...passed 00:18:10.157 Test: blockdev reset ...[2024-07-10 12:20:19.464743] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:18:10.157 [2024-07-10 12:20:19.468866] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:10.157 passed 00:18:10.157 Test: blockdev write read 8 blocks ...passed 00:18:10.157 Test: blockdev write read size > 128k ...passed 00:18:10.157 Test: blockdev write read invalid size ...passed 00:18:10.157 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:10.157 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:10.157 Test: blockdev write read max offset ...passed 00:18:10.157 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:10.157 Test: blockdev writev readv 8 blocks ...passed 00:18:10.157 Test: blockdev writev readv 30 x 1block ...passed 00:18:10.157 Test: blockdev writev readv block ...passed 00:18:10.157 Test: blockdev writev readv size > 128k ...passed 00:18:10.157 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:10.157 Test: blockdev comparev and writev ...[2024-07-10 12:20:19.479169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x283a36000 len:0x1000 00:18:10.157 [2024-07-10 12:20:19.479422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:18:10.157 passed 00:18:10.157 Test: blockdev nvme passthru rw ...passed 00:18:10.157 Test: blockdev nvme passthru vendor specific ...[2024-07-10 12:20:19.480930] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:18:10.157 [2024-07-10 12:20:19.481140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:18:10.157 passed 00:18:10.157 Test: blockdev nvme admin passthru ...passed 00:18:10.157 Test: blockdev copy ...passed 00:18:10.157 Suite: bdevio tests on: Nvme2n1 00:18:10.157 Test: blockdev write read block ...passed 00:18:10.157 Test: blockdev write zeroes read block ...passed 00:18:10.157 Test: blockdev write zeroes read no split ...passed 00:18:10.157 Test: blockdev write zeroes read split ...passed 00:18:10.157 Test: blockdev write zeroes read split partial ...passed 00:18:10.157 Test: blockdev reset ...[2024-07-10 12:20:19.565282] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:18:10.157 [2024-07-10 12:20:19.569561] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:10.157 passed 00:18:10.157 Test: blockdev write read 8 blocks ...passed 00:18:10.157 Test: blockdev write read size > 128k ...passed 00:18:10.157 Test: blockdev write read invalid size ...passed 00:18:10.157 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:10.157 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:10.157 Test: blockdev write read max offset ...passed 00:18:10.157 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:10.157 Test: blockdev writev readv 8 blocks ...passed 00:18:10.157 Test: blockdev writev readv 30 x 1block ...passed 00:18:10.157 Test: blockdev writev readv block ...passed 00:18:10.157 Test: blockdev writev readv size > 128k ...passed 00:18:10.157 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:10.157 Test: blockdev comparev and writev ...[2024-07-10 12:20:19.580121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x283a30000 len:0x1000 00:18:10.157 [2024-07-10 12:20:19.580402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:18:10.157 passed 00:18:10.157 Test: blockdev nvme passthru rw ...passed 00:18:10.157 Test: blockdev nvme passthru vendor specific ...[2024-07-10 12:20:19.581649] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:18:10.157 [2024-07-10 12:20:19.581873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:18:10.157 passed 00:18:10.157 Test: blockdev nvme admin passthru ...passed 00:18:10.157 Test: blockdev copy ...passed 00:18:10.157 Suite: bdevio tests on: Nvme1n1 00:18:10.157 Test: blockdev write read block ...passed 00:18:10.157 Test: blockdev write zeroes read block ...passed 00:18:10.157 Test: blockdev write zeroes read no split ...passed 00:18:10.157 Test: blockdev write zeroes read split ...passed 00:18:10.416 Test: blockdev write zeroes read split partial ...passed 00:18:10.416 Test: blockdev reset ...[2024-07-10 12:20:19.665174] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:18:10.416 [2024-07-10 12:20:19.669394] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:10.416 passed 00:18:10.416 Test: blockdev write read 8 blocks ...passed 00:18:10.416 Test: blockdev write read size > 128k ...passed 00:18:10.416 Test: blockdev write read invalid size ...passed 00:18:10.416 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:10.416 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:10.416 Test: blockdev write read max offset ...passed 00:18:10.416 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:10.416 Test: blockdev writev readv 8 blocks ...passed 00:18:10.416 Test: blockdev writev readv 30 x 1block ...passed 00:18:10.416 Test: blockdev writev readv block ...passed 00:18:10.416 Test: blockdev writev readv size > 128k ...passed 00:18:10.416 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:10.416 Test: blockdev comparev and writev ...[2024-07-10 12:20:19.679307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27220e000 len:0x1000 00:18:10.416 passed 00:18:10.416 Test: blockdev nvme passthru rw ...[2024-07-10 12:20:19.679586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:18:10.416 passed 00:18:10.416 Test: blockdev nvme passthru vendor specific ...[2024-07-10 12:20:19.680528] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:18:10.416 passed 00:18:10.416 Test: blockdev nvme admin passthru ...[2024-07-10 12:20:19.680793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:18:10.416 passed 00:18:10.416 Test: blockdev copy ...passed 00:18:10.416 Suite: bdevio tests on: Nvme0n1p2 00:18:10.416 Test: blockdev write read block ...passed 00:18:10.416 Test: blockdev write zeroes read block ...passed 00:18:10.416 Test: blockdev write zeroes read no split ...passed 00:18:10.416 Test: blockdev write zeroes read split ...passed 00:18:10.416 Test: blockdev write zeroes read split partial ...passed 00:18:10.416 Test: blockdev reset ...[2024-07-10 12:20:19.763411] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:18:10.416 [2024-07-10 12:20:19.767482] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:10.416 passed 00:18:10.416 Test: blockdev write read 8 blocks ...passed 00:18:10.416 Test: blockdev write read size > 128k ...passed 00:18:10.416 Test: blockdev write read invalid size ...passed 00:18:10.416 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:10.416 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:10.416 Test: blockdev write read max offset ...passed 00:18:10.416 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:10.416 Test: blockdev writev readv 8 blocks ...passed 00:18:10.416 Test: blockdev writev readv 30 x 1block ...passed 00:18:10.416 Test: blockdev writev readv block ...passed 00:18:10.417 Test: blockdev writev readv size > 128k ...passed 00:18:10.417 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:10.417 Test: blockdev comparev and writev ...passed 00:18:10.417 Test: blockdev nvme passthru rw ...passed 00:18:10.417 Test: blockdev nvme passthru vendor specific ...passed 00:18:10.417 Test: blockdev nvme admin passthru ...passed 00:18:10.417 Test: blockdev copy ...[2024-07-10 12:20:19.776444] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p2 since it has 00:18:10.417 separate metadata which is not supported yet. 00:18:10.417 passed 00:18:10.417 Suite: bdevio tests on: Nvme0n1p1 00:18:10.417 Test: blockdev write read block ...passed 00:18:10.417 Test: blockdev write zeroes read block ...passed 00:18:10.417 Test: blockdev write zeroes read no split ...passed 00:18:10.417 Test: blockdev write zeroes read split ...passed 00:18:10.417 Test: blockdev write zeroes read split partial ...passed 00:18:10.417 Test: blockdev reset ...[2024-07-10 12:20:19.851348] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:18:10.417 [2024-07-10 12:20:19.855068] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:18:10.417 passed 00:18:10.417 Test: blockdev write read 8 blocks ...passed 00:18:10.417 Test: blockdev write read size > 128k ...passed 00:18:10.417 Test: blockdev write read invalid size ...passed 00:18:10.417 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:10.417 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:10.417 Test: blockdev write read max offset ...passed 00:18:10.417 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:10.417 Test: blockdev writev readv 8 blocks ...passed 00:18:10.417 Test: blockdev writev readv 30 x 1block ...passed 00:18:10.417 Test: blockdev writev readv block ...passed 00:18:10.417 Test: blockdev writev readv size > 128k ...passed 00:18:10.417 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:10.417 Test: blockdev comparev and writev ...passed 00:18:10.417 Test: blockdev nvme passthru rw ...passed 00:18:10.417 Test: blockdev nvme passthru vendor specific ...passed 00:18:10.417 Test: blockdev nvme admin passthru ...passed 00:18:10.417 Test: blockdev copy ...[2024-07-10 12:20:19.863781] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p1 since it has 00:18:10.417 separate metadata which is not supported yet. 
00:18:10.417 passed 00:18:10.417 00:18:10.417 Run Summary: Type Total Ran Passed Failed Inactive 00:18:10.417 suites 7 7 n/a 0 0 00:18:10.417 tests 161 161 161 0 0 00:18:10.417 asserts 1006 1006 1006 0 n/a 00:18:10.417 00:18:10.417 Elapsed time = 1.862 seconds 00:18:10.417 0 00:18:10.417 12:20:19 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 68374 00:18:10.417 12:20:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 68374 ']' 00:18:10.417 12:20:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 68374 00:18:10.417 12:20:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:18:10.675 12:20:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:10.675 12:20:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68374 00:18:10.675 12:20:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:10.675 12:20:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:10.675 killing process with pid 68374 00:18:10.675 12:20:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68374' 00:18:10.675 12:20:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@967 -- # kill 68374 00:18:10.675 12:20:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # wait 68374 00:18:11.609 12:20:21 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:18:11.609 00:18:11.609 real 0m3.222s 00:18:11.609 user 0m7.841s 00:18:11.609 sys 0m0.422s 00:18:11.609 12:20:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:11.609 12:20:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:18:11.609 ************************************ 00:18:11.609 END TEST bdev_bounds 00:18:11.609 ************************************ 00:18:11.869 12:20:21 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:18:11.869 12:20:21 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:18:11.869 12:20:21 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:18:11.869 12:20:21 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:11.869 12:20:21 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:18:11.869 ************************************ 00:18:11.869 START TEST bdev_nbd 00:18:11.869 ************************************ 00:18:11.869 12:20:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:18:11.869 12:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:18:11.869 12:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:18:11.869 12:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:11.869 12:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:11.869 12:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 
'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:18:11.869 12:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:18:11.869 12:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=7 00:18:11.869 12:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:18:11.869 12:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:18:11.869 12:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:18:11.869 12:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=7 00:18:11.869 12:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:18:11.869 12:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:18:11.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:18:11.869 12:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:18:11.869 12:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:18:11.869 12:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=68439 00:18:11.869 12:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:18:11.869 12:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 68439 /var/tmp/spdk-nbd.sock 00:18:11.869 12:20:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 68439 ']' 00:18:11.869 12:20:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:18:11.869 12:20:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:11.869 12:20:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:18:11.869 12:20:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:11.869 12:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:18:11.869 12:20:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:18:11.869 [2024-07-10 12:20:21.230326] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:18:11.869 [2024-07-10 12:20:21.230682] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:12.129 [2024-07-10 12:20:21.404045] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:12.388 [2024-07-10 12:20:21.687457] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:12.958 12:20:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:12.958 12:20:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:18:12.958 12:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:18:12.958 12:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:12.958 12:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:18:12.958 12:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:18:12.958 12:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:18:12.958 12:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:12.958 12:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:18:12.958 12:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:18:12.958 12:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:18:12.958 12:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:18:12.958 12:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:18:12.958 12:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:18:12.958 12:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:18:13.217 12:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:18:13.217 12:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:18:13.217 12:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:18:13.217 12:20:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:18:13.217 12:20:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:18:13.217 12:20:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:18:13.217 12:20:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:18:13.217 12:20:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:18:13.217 12:20:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:18:13.217 12:20:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:18:13.217 12:20:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:18:13.217 12:20:22 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:13.217 1+0 records in 00:18:13.217 1+0 records out 00:18:13.217 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000543047 s, 7.5 MB/s 00:18:13.217 12:20:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:13.217 12:20:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:18:13.217 12:20:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:13.217 12:20:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:18:13.217 12:20:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:18:13.217 12:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:13.217 12:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:18:13.217 12:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:18:13.477 12:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:18:13.477 12:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:18:13.477 12:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:18:13.477 12:20:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:18:13.477 12:20:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:18:13.477 12:20:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:18:13.477 12:20:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:18:13.477 12:20:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:18:13.477 12:20:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:18:13.477 12:20:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:18:13.477 12:20:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:18:13.477 12:20:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:13.477 1+0 records in 00:18:13.477 1+0 records out 00:18:13.477 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000635985 s, 6.4 MB/s 00:18:13.477 12:20:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:13.477 12:20:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:18:13.477 12:20:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:13.477 12:20:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:18:13.477 12:20:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:18:13.477 12:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:13.477 12:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:18:13.477 12:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1 00:18:13.737 12:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:18:13.737 12:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:18:13.737 12:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:18:13.737 12:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:18:13.737 12:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:18:13.737 12:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:18:13.737 12:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:18:13.737 12:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:18:13.737 12:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:18:13.737 12:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:18:13.737 12:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:18:13.737 12:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:13.737 1+0 records in 00:18:13.737 1+0 records out 00:18:13.737 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000644615 s, 6.4 MB/s 00:18:13.737 12:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:13.737 12:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:18:13.737 12:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:13.737 12:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:18:13.737 12:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:18:13.737 12:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:13.737 12:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:18:13.737 12:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:18:13.997 12:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:18:13.997 12:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:18:13.997 12:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:18:13.997 12:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:18:13.997 12:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:18:13.997 12:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:18:13.997 12:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:18:13.997 12:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:18:13.997 12:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:18:13.997 12:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:18:13.997 12:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:18:13.997 12:20:23 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:13.997 1+0 records in 00:18:13.997 1+0 records out 00:18:13.997 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000728866 s, 5.6 MB/s 00:18:13.997 12:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:13.997 12:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:18:13.997 12:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:13.997 12:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:18:13.997 12:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:18:13.997 12:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:13.997 12:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:18:13.997 12:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:18:14.256 12:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:18:14.256 12:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:18:14.256 12:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:18:14.256 12:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:18:14.256 12:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:18:14.256 12:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:18:14.256 12:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:18:14.256 12:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:18:14.256 12:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:18:14.256 12:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:18:14.256 12:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:18:14.256 12:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:14.256 1+0 records in 00:18:14.256 1+0 records out 00:18:14.256 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000881693 s, 4.6 MB/s 00:18:14.256 12:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:14.256 12:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:18:14.256 12:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:14.256 12:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:18:14.256 12:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:18:14.256 12:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:14.256 12:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:18:14.256 12:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:18:14.515 12:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:18:14.515 12:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:18:14.515 12:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:18:14.515 12:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:18:14.515 12:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:18:14.515 12:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:18:14.515 12:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:18:14.515 12:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:18:14.515 12:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:18:14.515 12:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:18:14.515 12:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:18:14.515 12:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:14.515 1+0 records in 00:18:14.515 1+0 records out 00:18:14.515 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00103855 s, 3.9 MB/s 00:18:14.515 12:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:14.515 12:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:18:14.515 12:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:14.515 12:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:18:14.515 12:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:18:14.515 12:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:14.515 12:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:18:14.515 12:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:18:14.774 12:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:18:14.774 12:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:18:14.774 12:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:18:14.774 12:20:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd6 00:18:14.774 12:20:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:18:14.774 12:20:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:18:14.774 12:20:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:18:14.774 12:20:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions 00:18:14.774 12:20:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:18:14.775 12:20:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:18:14.775 12:20:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:18:14.775 12:20:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 
-- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:14.775 1+0 records in 00:18:14.775 1+0 records out 00:18:14.775 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000595634 s, 6.9 MB/s 00:18:14.775 12:20:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:14.775 12:20:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:18:14.775 12:20:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:14.775 12:20:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:18:14.775 12:20:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:18:14.775 12:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:14.775 12:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:18:14.775 12:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:15.034 12:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:18:15.034 { 00:18:15.034 "nbd_device": "/dev/nbd0", 00:18:15.034 "bdev_name": "Nvme0n1p1" 00:18:15.034 }, 00:18:15.034 { 00:18:15.034 "nbd_device": "/dev/nbd1", 00:18:15.034 "bdev_name": "Nvme0n1p2" 00:18:15.034 }, 00:18:15.034 { 00:18:15.034 "nbd_device": "/dev/nbd2", 00:18:15.034 "bdev_name": "Nvme1n1" 00:18:15.034 }, 00:18:15.034 { 00:18:15.034 "nbd_device": "/dev/nbd3", 00:18:15.034 "bdev_name": "Nvme2n1" 00:18:15.034 }, 00:18:15.034 { 00:18:15.034 "nbd_device": "/dev/nbd4", 00:18:15.034 "bdev_name": "Nvme2n2" 00:18:15.034 }, 00:18:15.034 { 00:18:15.034 "nbd_device": "/dev/nbd5", 00:18:15.034 "bdev_name": "Nvme2n3" 00:18:15.034 }, 00:18:15.034 { 00:18:15.034 "nbd_device": "/dev/nbd6", 00:18:15.034 "bdev_name": "Nvme3n1" 00:18:15.034 } 00:18:15.034 ]' 00:18:15.034 12:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:18:15.034 12:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:18:15.034 { 00:18:15.034 "nbd_device": "/dev/nbd0", 00:18:15.034 "bdev_name": "Nvme0n1p1" 00:18:15.034 }, 00:18:15.034 { 00:18:15.034 "nbd_device": "/dev/nbd1", 00:18:15.034 "bdev_name": "Nvme0n1p2" 00:18:15.034 }, 00:18:15.034 { 00:18:15.034 "nbd_device": "/dev/nbd2", 00:18:15.034 "bdev_name": "Nvme1n1" 00:18:15.034 }, 00:18:15.034 { 00:18:15.034 "nbd_device": "/dev/nbd3", 00:18:15.034 "bdev_name": "Nvme2n1" 00:18:15.034 }, 00:18:15.034 { 00:18:15.034 "nbd_device": "/dev/nbd4", 00:18:15.034 "bdev_name": "Nvme2n2" 00:18:15.034 }, 00:18:15.034 { 00:18:15.034 "nbd_device": "/dev/nbd5", 00:18:15.034 "bdev_name": "Nvme2n3" 00:18:15.034 }, 00:18:15.034 { 00:18:15.034 "nbd_device": "/dev/nbd6", 00:18:15.034 "bdev_name": "Nvme3n1" 00:18:15.034 } 00:18:15.034 ]' 00:18:15.034 12:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:18:15.034 12:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:18:15.034 12:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:15.034 12:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:18:15.034 12:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:15.034 12:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:15.034 12:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:15.034 12:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:15.329 12:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:15.329 12:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:15.329 12:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:15.329 12:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:15.329 12:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:15.329 12:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:15.329 12:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:15.329 12:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:15.329 12:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:15.329 12:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:18:15.329 12:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:15.329 12:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:15.329 12:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:15.329 12:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:15.329 12:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:15.329 12:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:15.329 12:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:15.329 12:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:15.329 12:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:15.329 12:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:18:15.603 12:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:18:15.603 12:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:18:15.603 12:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:18:15.603 12:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:15.603 12:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:15.603 12:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:18:15.603 12:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:15.603 12:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:15.603 12:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:15.603 12:20:24 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:18:15.878 12:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:18:15.878 12:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:18:15.878 12:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:18:15.878 12:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:15.878 12:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:15.878 12:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:18:15.878 12:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:15.878 12:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:15.878 12:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:15.878 12:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:18:16.137 12:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:18:16.137 12:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:18:16.137 12:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:18:16.137 12:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:16.137 12:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:16.137 12:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:18:16.137 12:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:16.137 12:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:16.137 12:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:16.137 12:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:18:16.137 12:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:18:16.137 12:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:18:16.137 12:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:18:16.137 12:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:16.137 12:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:16.137 12:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:18:16.137 12:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:16.137 12:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:16.137 12:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:16.137 12:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:18:16.397 12:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:18:16.397 12:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:18:16.397 12:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
00:18:16.397 12:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:16.397 12:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:16.397 12:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:18:16.397 12:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:16.397 12:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:16.397 12:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:16.397 12:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:16.397 12:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:16.655 12:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:16.655 12:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:16.655 12:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:16.655 12:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:16.655 12:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:16.655 12:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:18:16.655 12:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:18:16.655 12:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:18:16.655 12:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:18:16.655 12:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:18:16.655 12:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:18:16.655 12:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:18:16.655 12:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:18:16.655 12:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:16.655 12:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:18:16.655 12:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:18:16.655 12:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:18:16.655 12:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:18:16.655 12:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:18:16.655 12:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:16.656 12:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:18:16.656 12:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:16.656 12:20:26 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:18:16.656 12:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:16.656 12:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:18:16.656 12:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:16.656 12:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:18:16.656 12:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:18:16.915 /dev/nbd0 00:18:16.915 12:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:16.915 12:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:16.915 12:20:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:18:16.915 12:20:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:18:16.915 12:20:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:18:16.915 12:20:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:18:16.915 12:20:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:18:16.915 12:20:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:18:16.915 12:20:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:18:16.915 12:20:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:18:16.915 12:20:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:16.915 1+0 records in 00:18:16.915 1+0 records out 00:18:16.915 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000450551 s, 9.1 MB/s 00:18:16.915 12:20:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:16.915 12:20:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:18:16.915 12:20:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:16.915 12:20:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:18:16.915 12:20:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:18:16.915 12:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:16.915 12:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:18:16.915 12:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:18:17.174 /dev/nbd1 00:18:17.174 12:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:17.174 12:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:17.174 12:20:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:18:17.174 12:20:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:18:17.174 12:20:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:18:17.174 12:20:26 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:18:17.174 12:20:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:18:17.174 12:20:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:18:17.174 12:20:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:18:17.174 12:20:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:18:17.174 12:20:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:17.174 1+0 records in 00:18:17.174 1+0 records out 00:18:17.174 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000445269 s, 9.2 MB/s 00:18:17.174 12:20:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:17.174 12:20:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:18:17.174 12:20:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:17.174 12:20:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:18:17.174 12:20:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:18:17.174 12:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:17.174 12:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:18:17.174 12:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd10 00:18:17.433 /dev/nbd10 00:18:17.433 12:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:18:17.433 12:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:18:17.433 12:20:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:18:17.433 12:20:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:18:17.433 12:20:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:18:17.433 12:20:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:18:17.433 12:20:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:18:17.433 12:20:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:18:17.433 12:20:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:18:17.433 12:20:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:18:17.433 12:20:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:17.433 1+0 records in 00:18:17.433 1+0 records out 00:18:17.433 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00089208 s, 4.6 MB/s 00:18:17.433 12:20:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:17.433 12:20:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:18:17.433 12:20:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:17.433 12:20:26 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:18:17.433 12:20:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:18:17.433 12:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:17.433 12:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:18:17.433 12:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:18:17.692 /dev/nbd11 00:18:17.692 12:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:18:17.692 12:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:18:17.692 12:20:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:18:17.692 12:20:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:18:17.692 12:20:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:18:17.692 12:20:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:18:17.692 12:20:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:18:17.692 12:20:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:18:17.692 12:20:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:18:17.692 12:20:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:18:17.692 12:20:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:17.692 1+0 records in 00:18:17.692 1+0 records out 00:18:17.692 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000743768 s, 5.5 MB/s 00:18:17.692 12:20:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:17.692 12:20:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:18:17.692 12:20:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:17.692 12:20:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:18:17.692 12:20:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:18:17.692 12:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:17.692 12:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:18:17.692 12:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:18:17.692 /dev/nbd12 00:18:17.692 12:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:18:17.952 12:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:18:17.952 12:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:18:17.952 12:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:18:17.952 12:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:18:17.952 12:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:18:17.952 12:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 
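Each attach in this pass is gated by the same readiness check being traced here: poll /proc/partitions (up to 20 attempts) until the new nbd entry appears, then push one 4 KiB direct-I/O read through the device into a scratch file and confirm the result is non-empty. A rough reconstruction of that waitfornbd helper, assembled from the commands visible in the trace (the real function lives in common/autotest_common.sh and may differ in detail; /tmp/nbdtest stands in for the test's spdk/test/bdev/nbdtest scratch file):

    waitfornbd() {    # approximate reconstruction from the trace, not the exact helper source
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1    # assumed poll interval; the real value is not visible in this log
        done
        # One 4 KiB O_DIRECT read through the new device proves it services I/O;
        # the scratch file must be non-empty before it is removed.
        dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]
    }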
00:18:17.952 12:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:18:17.952 12:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:18:17.952 12:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:18:17.952 12:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:17.952 1+0 records in 00:18:17.952 1+0 records out 00:18:17.952 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000866098 s, 4.7 MB/s 00:18:17.952 12:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:17.952 12:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:18:17.952 12:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:17.952 12:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:18:17.952 12:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:18:17.952 12:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:17.952 12:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:18:17.952 12:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:18:17.952 /dev/nbd13 00:18:17.952 12:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:18:17.952 12:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:18:17.952 12:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:18:17.952 12:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:18:17.952 12:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:18:17.952 12:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:18:17.952 12:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:18:17.952 12:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:18:17.952 12:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:18:17.952 12:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:18:17.952 12:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:17.952 1+0 records in 00:18:17.952 1+0 records out 00:18:17.952 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00085258 s, 4.8 MB/s 00:18:17.952 12:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:17.952 12:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:18:17.952 12:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:17.952 12:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:18:17.952 12:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:18:17.952 12:20:27 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:17.952 12:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:18:17.952 12:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:18:18.211 /dev/nbd14 00:18:18.211 12:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:18:18.211 12:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:18:18.211 12:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd14 00:18:18.211 12:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:18:18.211 12:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:18:18.211 12:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:18:18.211 12:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd14 /proc/partitions 00:18:18.211 12:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:18:18.211 12:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:18:18.211 12:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:18:18.211 12:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:18.211 1+0 records in 00:18:18.211 1+0 records out 00:18:18.211 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000828928 s, 4.9 MB/s 00:18:18.211 12:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:18.211 12:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:18:18.211 12:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:18.211 12:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:18:18.212 12:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:18:18.212 12:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:18.212 12:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:18:18.212 12:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:18.212 12:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:18.212 12:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:18.470 12:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:18:18.470 { 00:18:18.470 "nbd_device": "/dev/nbd0", 00:18:18.470 "bdev_name": "Nvme0n1p1" 00:18:18.470 }, 00:18:18.470 { 00:18:18.470 "nbd_device": "/dev/nbd1", 00:18:18.470 "bdev_name": "Nvme0n1p2" 00:18:18.470 }, 00:18:18.470 { 00:18:18.470 "nbd_device": "/dev/nbd10", 00:18:18.470 "bdev_name": "Nvme1n1" 00:18:18.470 }, 00:18:18.470 { 00:18:18.470 "nbd_device": "/dev/nbd11", 00:18:18.470 "bdev_name": "Nvme2n1" 00:18:18.470 }, 00:18:18.470 { 00:18:18.470 "nbd_device": "/dev/nbd12", 00:18:18.470 "bdev_name": "Nvme2n2" 00:18:18.470 }, 00:18:18.470 { 00:18:18.470 "nbd_device": "/dev/nbd13", 00:18:18.470 "bdev_name": "Nvme2n3" 
00:18:18.470 }, 00:18:18.470 { 00:18:18.470 "nbd_device": "/dev/nbd14", 00:18:18.470 "bdev_name": "Nvme3n1" 00:18:18.470 } 00:18:18.470 ]' 00:18:18.470 12:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:18.470 12:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:18:18.470 { 00:18:18.470 "nbd_device": "/dev/nbd0", 00:18:18.470 "bdev_name": "Nvme0n1p1" 00:18:18.470 }, 00:18:18.470 { 00:18:18.470 "nbd_device": "/dev/nbd1", 00:18:18.470 "bdev_name": "Nvme0n1p2" 00:18:18.470 }, 00:18:18.470 { 00:18:18.470 "nbd_device": "/dev/nbd10", 00:18:18.470 "bdev_name": "Nvme1n1" 00:18:18.470 }, 00:18:18.470 { 00:18:18.470 "nbd_device": "/dev/nbd11", 00:18:18.470 "bdev_name": "Nvme2n1" 00:18:18.470 }, 00:18:18.470 { 00:18:18.470 "nbd_device": "/dev/nbd12", 00:18:18.470 "bdev_name": "Nvme2n2" 00:18:18.470 }, 00:18:18.470 { 00:18:18.471 "nbd_device": "/dev/nbd13", 00:18:18.471 "bdev_name": "Nvme2n3" 00:18:18.471 }, 00:18:18.471 { 00:18:18.471 "nbd_device": "/dev/nbd14", 00:18:18.471 "bdev_name": "Nvme3n1" 00:18:18.471 } 00:18:18.471 ]' 00:18:18.471 12:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:18:18.471 /dev/nbd1 00:18:18.471 /dev/nbd10 00:18:18.471 /dev/nbd11 00:18:18.471 /dev/nbd12 00:18:18.471 /dev/nbd13 00:18:18.471 /dev/nbd14' 00:18:18.471 12:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:18:18.471 /dev/nbd1 00:18:18.471 /dev/nbd10 00:18:18.471 /dev/nbd11 00:18:18.471 /dev/nbd12 00:18:18.471 /dev/nbd13 00:18:18.471 /dev/nbd14' 00:18:18.471 12:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:18.471 12:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:18:18.471 12:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:18:18.471 12:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:18:18.471 12:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:18:18.471 12:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:18:18.471 12:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:18:18.471 12:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:18.471 12:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:18:18.471 12:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:18.471 12:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:18:18.471 12:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:18:18.471 256+0 records in 00:18:18.471 256+0 records out 00:18:18.471 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0112363 s, 93.3 MB/s 00:18:18.471 12:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:18.471 12:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:18:18.729 256+0 records in 00:18:18.729 256+0 records out 00:18:18.729 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.138394 s, 7.6 MB/s 00:18:18.729 12:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:18.729 12:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:18:18.988 256+0 records in 00:18:18.988 256+0 records out 00:18:18.988 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.135853 s, 7.7 MB/s 00:18:18.988 12:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:18.988 12:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:18:18.988 256+0 records in 00:18:18.988 256+0 records out 00:18:18.988 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.13347 s, 7.9 MB/s 00:18:18.988 12:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:18.988 12:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:18:19.247 256+0 records in 00:18:19.247 256+0 records out 00:18:19.247 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.134648 s, 7.8 MB/s 00:18:19.247 12:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:19.247 12:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:18:19.247 256+0 records in 00:18:19.247 256+0 records out 00:18:19.247 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.136827 s, 7.7 MB/s 00:18:19.247 12:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:19.247 12:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:18:19.506 256+0 records in 00:18:19.506 256+0 records out 00:18:19.506 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.135791 s, 7.7 MB/s 00:18:19.506 12:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:19.506 12:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:18:19.506 256+0 records in 00:18:19.506 256+0 records out 00:18:19.506 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.137075 s, 7.6 MB/s 00:18:19.506 12:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:18:19.506 12:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:18:19.506 12:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:19.506 12:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:18:19.506 12:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:19.506 12:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:18:19.506 12:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:18:19.506 12:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:18:19.506 12:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:18:19.506 12:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:19.506 12:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:18:19.506 12:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:19.506 12:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:18:19.506 12:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:19.506 12:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:18:19.506 12:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:19.506 12:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:18:19.506 12:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:19.506 12:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:18:19.765 12:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:19.765 12:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:18:19.765 12:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:19.765 12:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:18:19.765 12:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:19.765 12:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:18:19.765 12:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:19.765 12:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:19.765 12:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:19.765 12:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:19.765 12:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:19.765 12:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:19.765 12:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:19.765 12:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:19.765 12:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:19.765 12:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:19.765 12:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:19.765 12:20:29 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:18:19.765 12:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:19.765 12:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:18:20.024 12:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:20.024 12:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:20.024 12:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:20.024 12:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:20.024 12:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:20.024 12:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:20.024 12:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:20.024 12:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:20.024 12:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:20.024 12:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:18:20.283 12:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:18:20.283 12:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:18:20.283 12:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:18:20.283 12:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:20.283 12:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:20.283 12:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:18:20.283 12:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:20.283 12:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:20.283 12:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:20.283 12:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:18:20.542 12:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:18:20.542 12:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:18:20.542 12:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:18:20.542 12:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:20.542 12:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:20.542 12:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:18:20.542 12:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:20.542 12:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:20.542 12:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:20.542 12:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:18:20.542 12:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:18:20.542 12:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:18:20.542 12:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:18:20.542 12:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:20.542 12:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:20.542 12:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:18:20.801 12:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:20.802 12:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:20.802 12:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:20.802 12:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:18:20.802 12:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:18:20.802 12:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:18:20.802 12:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:18:20.802 12:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:20.802 12:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:20.802 12:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:18:20.802 12:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:20.802 12:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:20.802 12:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:20.802 12:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:18:21.060 12:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:18:21.060 12:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:18:21.060 12:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:18:21.060 12:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:21.060 12:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:21.060 12:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:18:21.060 12:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:21.060 12:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:21.060 12:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:21.060 12:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:21.060 12:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:21.318 12:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:21.318 12:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:21.318 12:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:21.318 12:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:18:21.318 12:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:18:21.318 12:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:21.318 12:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:18:21.318 12:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:18:21.318 12:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:18:21.318 12:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:18:21.318 12:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:18:21.318 12:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:18:21.319 12:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:18:21.319 12:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:21.319 12:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:18:21.319 12:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:18:21.319 12:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:18:21.319 12:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:18:21.577 malloc_lvol_verify 00:18:21.577 12:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:18:21.838 739af710-4a88-4f06-8c3c-3584287f6510 00:18:21.838 12:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:18:21.838 6f8f7328-ddb0-4ab4-a86c-7c3bf02f5861 00:18:21.838 12:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:18:22.123 /dev/nbd0 00:18:22.123 12:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:18:22.123 mke2fs 1.46.5 (30-Dec-2021) 00:18:22.123 Discarding device blocks: 0/4096 done 00:18:22.123 Creating filesystem with 4096 1k blocks and 1024 inodes 00:18:22.123 00:18:22.123 Allocating group tables: 0/1 done 00:18:22.123 Writing inode tables: 0/1 done 00:18:22.123 Creating journal (1024 blocks): done 00:18:22.123 Writing superblocks and filesystem accounting information: 0/1 done 00:18:22.123 00:18:22.123 12:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:18:22.123 12:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:22.123 12:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:22.123 12:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:22.123 12:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:22.123 12:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:22.123 12:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:18:22.123 12:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:22.382 12:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:22.382 12:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:22.382 12:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:22.382 12:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:22.382 12:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:22.382 12:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:22.382 12:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:22.382 12:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:22.382 12:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:18:22.382 12:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:18:22.382 12:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 68439 00:18:22.382 12:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 68439 ']' 00:18:22.382 12:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 68439 00:18:22.382 12:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:18:22.382 12:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:22.382 12:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68439 00:18:22.382 12:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:22.382 killing process with pid 68439 00:18:22.382 12:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:22.382 12:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68439' 00:18:22.382 12:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@967 -- # kill 68439 00:18:22.382 12:20:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # wait 68439 00:18:23.761 12:20:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:18:23.761 00:18:23.761 real 0m12.067s 00:18:23.761 user 0m15.285s 00:18:23.761 sys 0m4.947s 00:18:23.761 ************************************ 00:18:23.761 END TEST bdev_nbd 00:18:23.761 ************************************ 00:18:23.761 12:20:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:23.761 12:20:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:18:24.019 12:20:33 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:18:24.019 12:20:33 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:18:24.019 12:20:33 blockdev_nvme_gpt -- bdev/blockdev.sh@764 -- # '[' gpt = nvme ']' 00:18:24.019 12:20:33 blockdev_nvme_gpt -- bdev/blockdev.sh@764 -- # '[' gpt = gpt ']' 00:18:24.019 12:20:33 blockdev_nvme_gpt -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:18:24.019 skipping fio tests on NVMe due to multi-ns failures. 
00:18:24.019 12:20:33 blockdev_nvme_gpt -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:24.019 12:20:33 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:18:24.019 12:20:33 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:18:24.019 12:20:33 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:24.019 12:20:33 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:18:24.019 ************************************ 00:18:24.019 START TEST bdev_verify 00:18:24.019 ************************************ 00:18:24.019 12:20:33 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:18:24.019 [2024-07-10 12:20:33.360771] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:18:24.019 [2024-07-10 12:20:33.360938] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68863 ] 00:18:24.278 [2024-07-10 12:20:33.532388] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:24.537 [2024-07-10 12:20:33.780959] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:24.537 [2024-07-10 12:20:33.780997] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:25.170 Running I/O for 5 seconds... 
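bdev_verify drives every bdev described in bdev.json through the bdevperf example application. Read with the usual bdevperf option meanings (worth confirming against bdevperf --help on this tree): -q 128 sets the per-job queue depth, -o 4096 the I/O size in bytes, -w verify a write-then-read-back-and-compare workload, -t 5 the run time in seconds, and -m 0x3 a two-core mask, which is why each bdev reports both a Core Mask 0x1 and a Core Mask 0x2 job in the table below; -C and the trailing empty argument are left out of this sketch. A hypothetical standalone run with the same parameters:

    ./build/examples/bdevperf --json ./test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -m 0x3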
00:18:30.443 00:18:30.443 Latency(us) 00:18:30.443 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:30.443 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:30.443 Verification LBA range: start 0x0 length 0x5e800 00:18:30.443 Nvme0n1p1 : 5.06 1417.75 5.54 0.00 0.00 89984.91 19476.56 97698.65 00:18:30.443 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:30.443 Verification LBA range: start 0x5e800 length 0x5e800 00:18:30.443 Nvme0n1p1 : 5.04 1396.39 5.45 0.00 0.00 91378.43 22424.37 96856.42 00:18:30.443 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:30.443 Verification LBA range: start 0x0 length 0x5e7ff 00:18:30.443 Nvme0n1p2 : 5.06 1417.32 5.54 0.00 0.00 89846.05 21792.69 90118.58 00:18:30.443 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:30.443 Verification LBA range: start 0x5e7ff length 0x5e7ff 00:18:30.443 Nvme0n1p2 : 5.04 1395.96 5.45 0.00 0.00 91232.63 23477.15 89276.35 00:18:30.443 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:30.443 Verification LBA range: start 0x0 length 0xa0000 00:18:30.443 Nvme1n1 : 5.06 1416.94 5.53 0.00 0.00 89530.62 20002.96 77064.02 00:18:30.443 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:30.443 Verification LBA range: start 0xa0000 length 0xa0000 00:18:30.443 Nvme1n1 : 5.08 1410.05 5.51 0.00 0.00 90098.52 13475.68 80011.82 00:18:30.443 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:30.443 Verification LBA range: start 0x0 length 0x80000 00:18:30.443 Nvme2n1 : 5.08 1424.32 5.56 0.00 0.00 88905.46 6000.89 75379.56 00:18:30.443 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:30.443 Verification LBA range: start 0x80000 length 0x80000 00:18:30.443 Nvme2n1 : 5.08 1409.66 5.51 0.00 0.00 89966.40 12528.17 78748.48 00:18:30.443 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:30.443 Verification LBA range: start 0x0 length 0x80000 00:18:30.443 Nvme2n2 : 5.09 1434.07 5.60 0.00 0.00 88269.41 7422.15 77064.02 00:18:30.443 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:30.443 Verification LBA range: start 0x80000 length 0x80000 00:18:30.443 Nvme2n2 : 5.09 1409.34 5.51 0.00 0.00 89841.69 12159.69 77064.02 00:18:30.443 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:30.443 Verification LBA range: start 0x0 length 0x80000 00:18:30.443 Nvme2n3 : 5.09 1433.42 5.60 0.00 0.00 88140.09 8685.49 79590.71 00:18:30.443 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:30.443 Verification LBA range: start 0x80000 length 0x80000 00:18:30.443 Nvme2n3 : 5.09 1408.96 5.50 0.00 0.00 89696.60 11633.30 75800.67 00:18:30.443 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:30.443 Verification LBA range: start 0x0 length 0x20000 00:18:30.443 Nvme3n1 : 5.09 1432.81 5.60 0.00 0.00 88012.00 8896.05 81275.17 00:18:30.443 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:30.443 Verification LBA range: start 0x20000 length 0x20000 00:18:30.443 Nvme3n1 : 5.09 1408.32 5.50 0.00 0.00 89569.69 11896.49 77906.25 00:18:30.443 =================================================================================================================== 00:18:30.443 Total : 19815.33 77.40 0.00 0.00 89594.81 6000.89 97698.65 00:18:32.349 
00:18:32.349 real 0m8.043s 00:18:32.349 user 0m14.601s 00:18:32.349 sys 0m0.303s 00:18:32.349 12:20:41 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:32.349 12:20:41 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:18:32.349 ************************************ 00:18:32.349 END TEST bdev_verify 00:18:32.349 ************************************ 00:18:32.349 12:20:41 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:18:32.349 12:20:41 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:18:32.349 12:20:41 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:18:32.349 12:20:41 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:32.349 12:20:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:18:32.349 ************************************ 00:18:32.349 START TEST bdev_verify_big_io 00:18:32.349 ************************************ 00:18:32.349 12:20:41 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:18:32.349 [2024-07-10 12:20:41.474924] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:18:32.349 [2024-07-10 12:20:41.475087] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68967 ] 00:18:32.349 [2024-07-10 12:20:41.649867] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:32.607 [2024-07-10 12:20:41.899878] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:32.607 [2024-07-10 12:20:41.899917] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:33.564 Running I/O for 5 seconds... 
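The MiB/s column in the 4 KiB verify table above is just IOPS times the 4096-byte I/O size. A quick check against the first Nvme0n1p1 row (values copied from the table; the awk line is only a convenient calculator):

    awk 'BEGIN { printf "%.2f MiB/s\n", 1417.75 * 4096 / (1024 * 1024) }'   # prints 5.54 MiB/s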
00:18:40.127 00:18:40.127 Latency(us) 00:18:40.127 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:40.127 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:18:40.127 Verification LBA range: start 0x0 length 0x5e80 00:18:40.127 Nvme0n1p1 : 5.63 125.72 7.86 0.00 0.00 979764.32 34110.30 1125218.90 00:18:40.127 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:18:40.127 Verification LBA range: start 0x5e80 length 0x5e80 00:18:40.127 Nvme0n1p1 : 5.74 128.51 8.03 0.00 0.00 951778.07 41479.81 1051102.69 00:18:40.127 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:18:40.127 Verification LBA range: start 0x0 length 0x5e7f 00:18:40.127 Nvme0n1p2 : 5.74 132.21 8.26 0.00 0.00 914041.73 80011.82 1118481.07 00:18:40.127 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:18:40.127 Verification LBA range: start 0x5e7f length 0x5e7f 00:18:40.127 Nvme0n1p2 : 5.74 134.56 8.41 0.00 0.00 886064.07 104436.49 902870.26 00:18:40.127 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:18:40.127 Verification LBA range: start 0x0 length 0xa000 00:18:40.127 Nvme1n1 : 5.82 132.39 8.27 0.00 0.00 881269.80 106963.17 976986.47 00:18:40.127 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:18:40.127 Verification LBA range: start 0xa000 length 0xa000 00:18:40.127 Nvme1n1 : 5.75 137.47 8.59 0.00 0.00 851136.56 104436.49 778220.26 00:18:40.127 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:18:40.127 Verification LBA range: start 0x0 length 0x8000 00:18:40.127 Nvme2n1 : 5.82 137.31 8.58 0.00 0.00 839561.25 73273.99 990462.15 00:18:40.127 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:18:40.127 Verification LBA range: start 0x8000 length 0x8000 00:18:40.127 Nvme2n1 : 5.81 143.13 8.95 0.00 0.00 803301.40 63167.23 784958.10 00:18:40.127 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:18:40.127 Verification LBA range: start 0x0 length 0x8000 00:18:40.127 Nvme2n2 : 5.84 142.21 8.89 0.00 0.00 793590.07 22108.53 859074.31 00:18:40.127 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:18:40.127 Verification LBA range: start 0x8000 length 0x8000 00:18:40.127 Nvme2n2 : 5.84 139.73 8.73 0.00 0.00 806834.33 24108.83 1596867.55 00:18:40.127 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:18:40.127 Verification LBA range: start 0x0 length 0x8000 00:18:40.127 Nvme2n3 : 5.91 147.85 9.24 0.00 0.00 740684.12 26530.24 875918.91 00:18:40.127 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:18:40.127 Verification LBA range: start 0x8000 length 0x8000 00:18:40.127 Nvme2n3 : 5.91 143.82 8.99 0.00 0.00 759950.19 34741.98 1623818.90 00:18:40.127 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:18:40.127 Verification LBA range: start 0x0 length 0x2000 00:18:40.127 Nvme3n1 : 5.93 161.51 10.09 0.00 0.00 664928.59 1243.60 1010675.66 00:18:40.127 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:18:40.127 Verification LBA range: start 0x2000 length 0x2000 00:18:40.127 Nvme3n1 : 5.92 159.87 9.99 0.00 0.00 670817.88 1533.12 1644032.41 00:18:40.127 =================================================================================================================== 00:18:40.127 Total : 1966.30 122.89 0.00 0.00 816818.36 1243.60 
1644032.41 00:18:41.501 00:18:41.501 real 0m9.574s 00:18:41.501 user 0m17.575s 00:18:41.501 sys 0m0.361s 00:18:41.501 12:20:50 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:41.501 ************************************ 00:18:41.501 END TEST bdev_verify_big_io 00:18:41.501 ************************************ 00:18:41.501 12:20:50 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:18:41.760 12:20:51 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:18:41.760 12:20:51 blockdev_nvme_gpt -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:41.760 12:20:51 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:18:41.760 12:20:51 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:41.760 12:20:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:18:41.760 ************************************ 00:18:41.760 START TEST bdev_write_zeroes 00:18:41.760 ************************************ 00:18:41.760 12:20:51 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:41.760 [2024-07-10 12:20:51.122509] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:18:41.760 [2024-07-10 12:20:51.122694] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69088 ] 00:18:42.019 [2024-07-10 12:20:51.288846] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:42.277 [2024-07-10 12:20:51.538967] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:42.844 Running I/O for 1 seconds... 
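The same arithmetic applied to the 64 KiB big-I/O table above shows the expected large-block tradeoff: per-job IOPS is roughly a tenth of the 4 KiB verify run while the per-job bandwidth is higher. Using one of the Nvme0n1p1 rows (values copied from the table):

    awk 'BEGIN { printf "%.2f MiB/s\n", 128.51 * 65536 / (1024 * 1024) }'   # prints 8.03 MiB/s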
00:18:44.214 00:18:44.214 Latency(us) 00:18:44.214 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:44.214 Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:18:44.214 Nvme0n1p1 : 1.01 10039.84 39.22 0.00 0.00 12711.19 9159.25 36847.55 00:18:44.214 Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:18:44.214 Nvme0n1p2 : 1.01 10028.71 39.17 0.00 0.00 12706.15 9317.17 36636.99 00:18:44.214 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:18:44.214 Nvme1n1 : 1.02 10019.18 39.14 0.00 0.00 12664.34 9685.64 34320.86 00:18:44.214 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:18:44.214 Nvme2n1 : 1.02 10046.80 39.25 0.00 0.00 12589.91 8053.82 32004.73 00:18:44.214 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:18:44.214 Nvme2n2 : 1.02 10076.00 39.36 0.00 0.00 12493.79 5263.94 27583.02 00:18:44.214 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:18:44.214 Nvme2n3 : 1.03 10106.03 39.48 0.00 0.00 12410.84 4816.50 24003.55 00:18:44.214 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:18:44.214 Nvme3n1 : 1.03 10096.83 39.44 0.00 0.00 12387.35 5079.70 22424.37 00:18:44.214 =================================================================================================================== 00:18:44.214 Total : 70413.39 275.05 0.00 0.00 12565.22 4816.50 36847.55 00:18:45.185 00:18:45.185 real 0m3.636s 00:18:45.185 user 0m3.238s 00:18:45.185 sys 0m0.281s 00:18:45.444 12:20:54 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:45.444 12:20:54 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:18:45.444 ************************************ 00:18:45.444 END TEST bdev_write_zeroes 00:18:45.444 ************************************ 00:18:45.444 12:20:54 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:18:45.444 12:20:54 blockdev_nvme_gpt -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:45.444 12:20:54 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:18:45.444 12:20:54 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:45.444 12:20:54 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:18:45.444 ************************************ 00:18:45.444 START TEST bdev_json_nonenclosed 00:18:45.444 ************************************ 00:18:45.444 12:20:54 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:45.444 [2024-07-10 12:20:54.832224] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
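The two tests that follow, bdev_json_nonenclosed and bdev_json_nonarray, are negative tests: bdevperf is handed deliberately malformed --json configuration files and must reject them. A well-formed SPDK JSON config is a single object whose "subsystems" member is an array of subsystem blocks; a minimal sketch of the shape involved (contents illustrative only):

    cat > good.json <<'EOF'
    { "subsystems": [ { "subsystem": "bdev", "config": [] } ] }
    EOF
    # nonenclosed.json drops the enclosing {}, and nonarray.json makes
    # "subsystems" a non-array value, matching the two errors logged below.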
00:18:45.444 [2024-07-10 12:20:54.832369] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69146 ] 00:18:45.703 [2024-07-10 12:20:55.005817] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.961 [2024-07-10 12:20:55.247854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:45.961 [2024-07-10 12:20:55.247956] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:18:45.961 [2024-07-10 12:20:55.247977] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:18:45.961 [2024-07-10 12:20:55.247993] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:46.530 00:18:46.530 real 0m0.978s 00:18:46.530 user 0m0.715s 00:18:46.530 sys 0m0.156s 00:18:46.530 12:20:55 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:18:46.530 12:20:55 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:46.530 12:20:55 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:18:46.530 ************************************ 00:18:46.530 END TEST bdev_json_nonenclosed 00:18:46.530 ************************************ 00:18:46.530 12:20:55 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 234 00:18:46.530 12:20:55 blockdev_nvme_gpt -- bdev/blockdev.sh@782 -- # true 00:18:46.530 12:20:55 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:46.530 12:20:55 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:18:46.530 12:20:55 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:46.530 12:20:55 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:18:46.530 ************************************ 00:18:46.530 START TEST bdev_json_nonarray 00:18:46.530 ************************************ 00:18:46.530 12:20:55 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:46.530 [2024-07-10 12:20:55.884436] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:18:46.530 [2024-07-10 12:20:55.884578] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69178 ] 00:18:46.788 [2024-07-10 12:20:56.055680] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:47.047 [2024-07-10 12:20:56.299811] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:47.047 [2024-07-10 12:20:56.299925] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
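Note how the harness treats the rejection above as a pass: the app exits non-zero (es=234 is recorded and returned), and the caller at blockdev.sh line 782 then evaluates a bare true, so the non-zero status is deliberately swallowed. The generic expect-failure idiom looks roughly like this (false stands in for the bdevperf call with the malformed config; this is not the SPDK helper itself):

    if false; then
        echo 'expected a failure but the command succeeded' >&2
        exit 1
    else
        es=$?        # non-zero status of the failing command (234 in the run above)
    fi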
00:18:47.047 [2024-07-10 12:20:56.299954] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:18:47.047 [2024-07-10 12:20:56.299969] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:47.305 00:18:47.305 real 0m0.973s 00:18:47.305 user 0m0.715s 00:18:47.305 sys 0m0.153s 00:18:47.305 12:20:56 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:18:47.305 12:20:56 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:47.305 12:20:56 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:18:47.305 ************************************ 00:18:47.305 END TEST bdev_json_nonarray 00:18:47.305 ************************************ 00:18:47.563 12:20:56 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 234 00:18:47.563 12:20:56 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # true 00:18:47.563 12:20:56 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # [[ gpt == bdev ]] 00:18:47.563 12:20:56 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # [[ gpt == gpt ]] 00:18:47.563 12:20:56 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:18:47.563 12:20:56 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:18:47.563 12:20:56 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:47.563 12:20:56 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:18:47.563 ************************************ 00:18:47.563 START TEST bdev_gpt_uuid 00:18:47.563 ************************************ 00:18:47.563 12:20:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1123 -- # bdev_gpt_uuid 00:18:47.563 12:20:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@614 -- # local bdev 00:18:47.563 12:20:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@616 -- # start_spdk_tgt 00:18:47.563 12:20:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=69209 00:18:47.563 12:20:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:18:47.563 12:20:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:18:47.563 12:20:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 69209 00:18:47.563 12:20:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@829 -- # '[' -z 69209 ']' 00:18:47.563 12:20:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:47.563 12:20:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:47.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:47.563 12:20:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:47.563 12:20:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:47.563 12:20:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:18:47.563 [2024-07-10 12:20:56.946908] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
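bdev_gpt_uuid starts spdk_tgt, loads the same bdev.json, and then looks each GPT partition bdev up by its unique partition GUID over JSON-RPC, filtering the result with jq. A condensed form of the lookup traced below (UUID taken from the log, jq filter illustrative):

    ./scripts/rpc.py bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 \
        | jq -r '.[0].aliases[0], .[0].driver_specific.gpt.unique_partition_guid'

The test then string-compares both values against the expected GUIDs, which is what the [[ ... == \6\f\8\9... ]] checks below are doing.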
00:18:47.563 [2024-07-10 12:20:56.947046] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69209 ] 00:18:47.822 [2024-07-10 12:20:57.119911] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:48.081 [2024-07-10 12:20:57.360725] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:49.016 12:20:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:49.016 12:20:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@862 -- # return 0 00:18:49.016 12:20:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:49.016 12:20:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.016 12:20:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:18:49.275 Some configs were skipped because the RPC state that can call them passed over. 00:18:49.275 12:20:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.275 12:20:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_wait_for_examine 00:18:49.275 12:20:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.275 12:20:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:18:49.275 12:20:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.275 12:20:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:18:49.275 12:20:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.275 12:20:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:18:49.275 12:20:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.275 12:20:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # bdev='[ 00:18:49.275 { 00:18:49.275 "name": "Nvme0n1p1", 00:18:49.275 "aliases": [ 00:18:49.275 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:18:49.275 ], 00:18:49.275 "product_name": "GPT Disk", 00:18:49.275 "block_size": 4096, 00:18:49.275 "num_blocks": 774144, 00:18:49.275 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:18:49.275 "md_size": 64, 00:18:49.275 "md_interleave": false, 00:18:49.275 "dif_type": 0, 00:18:49.275 "assigned_rate_limits": { 00:18:49.275 "rw_ios_per_sec": 0, 00:18:49.275 "rw_mbytes_per_sec": 0, 00:18:49.275 "r_mbytes_per_sec": 0, 00:18:49.275 "w_mbytes_per_sec": 0 00:18:49.275 }, 00:18:49.275 "claimed": false, 00:18:49.275 "zoned": false, 00:18:49.275 "supported_io_types": { 00:18:49.275 "read": true, 00:18:49.275 "write": true, 00:18:49.275 "unmap": true, 00:18:49.275 "flush": true, 00:18:49.275 "reset": true, 00:18:49.275 "nvme_admin": false, 00:18:49.275 "nvme_io": false, 00:18:49.275 "nvme_io_md": false, 00:18:49.275 "write_zeroes": true, 00:18:49.275 "zcopy": false, 00:18:49.275 "get_zone_info": false, 00:18:49.275 "zone_management": false, 00:18:49.275 "zone_append": false, 00:18:49.275 "compare": true, 00:18:49.275 "compare_and_write": false, 00:18:49.275 "abort": true, 00:18:49.275 "seek_hole": false, 00:18:49.275 "seek_data": false, 00:18:49.275 "copy": 
true, 00:18:49.275 "nvme_iov_md": false 00:18:49.275 }, 00:18:49.275 "driver_specific": { 00:18:49.275 "gpt": { 00:18:49.275 "base_bdev": "Nvme0n1", 00:18:49.275 "offset_blocks": 256, 00:18:49.275 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:18:49.275 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:18:49.275 "partition_name": "SPDK_TEST_first" 00:18:49.275 } 00:18:49.275 } 00:18:49.275 } 00:18:49.275 ]' 00:18:49.275 12:20:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r length 00:18:49.275 12:20:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 1 == \1 ]] 00:18:49.275 12:20:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].aliases[0]' 00:18:49.275 12:20:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:18:49.275 12:20:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@624 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:18:49.534 12:20:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@624 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:18:49.534 12:20:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:18:49.534 12:20:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.534 12:20:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:18:49.534 12:20:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.534 12:20:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # bdev='[ 00:18:49.534 { 00:18:49.534 "name": "Nvme0n1p2", 00:18:49.534 "aliases": [ 00:18:49.534 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:18:49.534 ], 00:18:49.534 "product_name": "GPT Disk", 00:18:49.534 "block_size": 4096, 00:18:49.534 "num_blocks": 774143, 00:18:49.534 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:18:49.534 "md_size": 64, 00:18:49.534 "md_interleave": false, 00:18:49.534 "dif_type": 0, 00:18:49.534 "assigned_rate_limits": { 00:18:49.534 "rw_ios_per_sec": 0, 00:18:49.534 "rw_mbytes_per_sec": 0, 00:18:49.534 "r_mbytes_per_sec": 0, 00:18:49.534 "w_mbytes_per_sec": 0 00:18:49.534 }, 00:18:49.534 "claimed": false, 00:18:49.534 "zoned": false, 00:18:49.534 "supported_io_types": { 00:18:49.534 "read": true, 00:18:49.534 "write": true, 00:18:49.534 "unmap": true, 00:18:49.534 "flush": true, 00:18:49.534 "reset": true, 00:18:49.534 "nvme_admin": false, 00:18:49.534 "nvme_io": false, 00:18:49.534 "nvme_io_md": false, 00:18:49.534 "write_zeroes": true, 00:18:49.534 "zcopy": false, 00:18:49.534 "get_zone_info": false, 00:18:49.534 "zone_management": false, 00:18:49.534 "zone_append": false, 00:18:49.534 "compare": true, 00:18:49.534 "compare_and_write": false, 00:18:49.534 "abort": true, 00:18:49.534 "seek_hole": false, 00:18:49.534 "seek_data": false, 00:18:49.534 "copy": true, 00:18:49.534 "nvme_iov_md": false 00:18:49.534 }, 00:18:49.534 "driver_specific": { 00:18:49.534 "gpt": { 00:18:49.534 "base_bdev": "Nvme0n1", 00:18:49.534 "offset_blocks": 774400, 00:18:49.534 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:18:49.534 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:18:49.534 "partition_name": "SPDK_TEST_second" 00:18:49.534 } 00:18:49.534 
} 00:18:49.534 } 00:18:49.534 ]' 00:18:49.534 12:20:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r length 00:18:49.534 12:20:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ 1 == \1 ]] 00:18:49.534 12:20:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].aliases[0]' 00:18:49.534 12:20:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:18:49.534 12:20:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@629 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:18:49.534 12:20:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@629 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:18:49.534 12:20:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@631 -- # killprocess 69209 00:18:49.534 12:20:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@948 -- # '[' -z 69209 ']' 00:18:49.534 12:20:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@952 -- # kill -0 69209 00:18:49.534 12:20:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@953 -- # uname 00:18:49.534 12:20:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:49.534 12:20:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69209 00:18:49.534 12:20:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:49.534 killing process with pid 69209 00:18:49.534 12:20:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:49.534 12:20:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69209' 00:18:49.534 12:20:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@967 -- # kill 69209 00:18:49.534 12:20:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # wait 69209 00:18:52.066 00:18:52.066 real 0m4.532s 00:18:52.066 user 0m4.605s 00:18:52.066 sys 0m0.550s 00:18:52.066 12:21:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:52.066 12:21:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:18:52.066 ************************************ 00:18:52.066 END TEST bdev_gpt_uuid 00:18:52.066 ************************************ 00:18:52.066 12:21:01 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:18:52.066 12:21:01 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # [[ gpt == crypto_sw ]] 00:18:52.066 12:21:01 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:18:52.066 12:21:01 blockdev_nvme_gpt -- bdev/blockdev.sh@811 -- # cleanup 00:18:52.066 12:21:01 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:18:52.066 12:21:01 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:52.066 12:21:01 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:18:52.066 12:21:01 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:18:52.066 12:21:01 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:18:52.066 12:21:01 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 
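The cleanup path at the end of blockdev_nvme_gpt hands the controllers back to the kernel and wipes the GPT the test wrote, which is what the next few lines show. The two steps, condensed (device name as in the log, run as root):

    sudo ./scripts/setup.sh reset      # rebind from uio_pci_generic back to the kernel nvme driver
    sudo wipefs --all /dev/nvme1n1     # erase the primary and backup GPT headers plus the PMBR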
00:18:52.632 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:52.907 Waiting for block devices as requested 00:18:52.907 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:18:52.907 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:18:53.166 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:18:53.166 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:18:58.434 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:18:58.434 12:21:07 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme1n1 ]] 00:18:58.434 12:21:07 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme1n1 00:18:58.694 /dev/nvme1n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:18:58.694 /dev/nvme1n1: 8 bytes were erased at offset 0x17a179000 (gpt): 45 46 49 20 50 41 52 54 00:18:58.694 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:18:58.694 /dev/nvme1n1: calling ioctl to re-read partition table: Success 00:18:58.694 12:21:07 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:18:58.694 ************************************ 00:18:58.694 END TEST blockdev_nvme_gpt 00:18:58.694 ************************************ 00:18:58.694 00:18:58.694 real 1m7.500s 00:18:58.694 user 1m22.983s 00:18:58.694 sys 0m11.733s 00:18:58.694 12:21:07 blockdev_nvme_gpt -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:58.694 12:21:07 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:18:58.694 12:21:08 -- common/autotest_common.sh@1142 -- # return 0 00:18:58.694 12:21:08 -- spdk/autotest.sh@216 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:18:58.694 12:21:08 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:18:58.694 12:21:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:58.694 12:21:08 -- common/autotest_common.sh@10 -- # set +x 00:18:58.694 ************************************ 00:18:58.694 START TEST nvme 00:18:58.694 ************************************ 00:18:58.694 12:21:08 nvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:18:58.694 * Looking for test storage... 00:18:58.694 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:18:58.694 12:21:08 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:59.631 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:00.200 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:19:00.200 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:19:00.200 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:19:00.200 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:19:00.460 12:21:09 nvme -- nvme/nvme.sh@79 -- # uname 00:19:00.460 12:21:09 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:19:00.460 12:21:09 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:19:00.460 12:21:09 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:19:00.460 12:21:09 nvme -- common/autotest_common.sh@1080 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:19:00.460 12:21:09 nvme -- common/autotest_common.sh@1066 -- # _randomize_va_space=2 00:19:00.460 12:21:09 nvme -- common/autotest_common.sh@1067 -- # echo 0 00:19:00.460 12:21:09 nvme -- common/autotest_common.sh@1069 -- # stubpid=69861 00:19:00.460 Waiting for stub to ready for secondary processes... 
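The nvme suite starts by rebinding the controllers to userspace (setup.sh) and launching test/app/stub with -s 4096 (hugepage memory in MB), -i 0 (shared-memory id) and -m 0xE (core mask); judging by the --proc-type options in the EAL parameters logged below, the stub acts as the long-lived primary process that the individual nvme test binaries then join, so each test avoids re-initializing the environment from scratch. The wait that follows is just a poll for the stub's marker file, roughly (simplified, not the exact autotest_common.sh helper):

    while [ ! -e /var/run/spdk_stub0 ]; do
        sleep 1s
    done
    echo done.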
00:19:00.460 12:21:09 nvme -- common/autotest_common.sh@1068 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:19:00.460 12:21:09 nvme -- common/autotest_common.sh@1070 -- # echo Waiting for stub to ready for secondary processes... 00:19:00.460 12:21:09 nvme -- common/autotest_common.sh@1071 -- # '[' -e /var/run/spdk_stub0 ']' 00:19:00.460 12:21:09 nvme -- common/autotest_common.sh@1073 -- # [[ -e /proc/69861 ]] 00:19:00.460 12:21:09 nvme -- common/autotest_common.sh@1074 -- # sleep 1s 00:19:00.460 [2024-07-10 12:21:09.800555] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:19:00.460 [2024-07-10 12:21:09.800694] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:19:01.397 12:21:10 nvme -- common/autotest_common.sh@1071 -- # '[' -e /var/run/spdk_stub0 ']' 00:19:01.397 12:21:10 nvme -- common/autotest_common.sh@1073 -- # [[ -e /proc/69861 ]] 00:19:01.397 12:21:10 nvme -- common/autotest_common.sh@1074 -- # sleep 1s 00:19:01.655 [2024-07-10 12:21:11.105146] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:01.914 [2024-07-10 12:21:11.329568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:01.914 [2024-07-10 12:21:11.329708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:01.914 [2024-07-10 12:21:11.329774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:01.914 [2024-07-10 12:21:11.349310] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:19:01.914 [2024-07-10 12:21:11.349352] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:19:01.914 [2024-07-10 12:21:11.366810] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:19:01.914 [2024-07-10 12:21:11.366974] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:19:01.914 [2024-07-10 12:21:11.370690] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:19:01.914 [2024-07-10 12:21:11.371061] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:19:01.914 [2024-07-10 12:21:11.371152] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:19:01.914 [2024-07-10 12:21:11.375001] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:19:01.914 [2024-07-10 12:21:11.375394] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:19:01.914 [2024-07-10 12:21:11.375471] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:19:01.914 [2024-07-10 12:21:11.379194] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:19:01.914 [2024-07-10 12:21:11.379623] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:19:01.914 [2024-07-10 12:21:11.379709] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:19:01.914 [2024-07-10 12:21:11.379782] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:19:01.914 [2024-07-10 12:21:11.379842] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:19:02.482 12:21:11 nvme -- common/autotest_common.sh@1071 -- # '[' -e /var/run/spdk_stub0 ']' 00:19:02.482 done. 00:19:02.482 12:21:11 nvme -- common/autotest_common.sh@1076 -- # echo done. 00:19:02.482 12:21:11 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:19:02.482 12:21:11 nvme -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:19:02.482 12:21:11 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:02.482 12:21:11 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:02.482 ************************************ 00:19:02.482 START TEST nvme_reset 00:19:02.482 ************************************ 00:19:02.482 12:21:11 nvme.nvme_reset -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:19:02.741 Initializing NVMe Controllers 00:19:02.741 Skipping QEMU NVMe SSD at 0000:00:10.0 00:19:02.741 Skipping QEMU NVMe SSD at 0000:00:11.0 00:19:02.741 Skipping QEMU NVMe SSD at 0000:00:13.0 00:19:02.741 Skipping QEMU NVMe SSD at 0000:00:12.0 00:19:02.741 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:19:02.741 00:19:02.741 real 0m0.277s 00:19:02.741 user 0m0.100s 00:19:02.741 sys 0m0.133s 00:19:02.741 12:21:12 nvme.nvme_reset -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:02.741 12:21:12 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:19:02.741 ************************************ 00:19:02.741 END TEST nvme_reset 00:19:02.741 ************************************ 00:19:02.741 12:21:12 nvme -- common/autotest_common.sh@1142 -- # return 0 00:19:02.741 12:21:12 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:19:02.741 12:21:12 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:02.741 12:21:12 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:02.741 12:21:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:02.741 ************************************ 00:19:02.741 START TEST nvme_identify 00:19:02.741 ************************************ 00:19:02.741 12:21:12 nvme.nvme_identify -- common/autotest_common.sh@1123 -- # nvme_identify 00:19:02.741 12:21:12 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:19:02.741 12:21:12 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:19:02.741 12:21:12 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:19:02.741 12:21:12 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:19:02.741 12:21:12 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # bdfs=() 00:19:02.741 12:21:12 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # local bdfs 00:19:02.741 12:21:12 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:19:02.741 12:21:12 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:02.741 12:21:12 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:19:02.741 12:21:12 nvme.nvme_identify -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:19:02.742 12:21:12 nvme.nvme_identify -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:19:02.742 12:21:12 nvme.nvme_identify -- nvme/nvme.sh@14 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:19:03.004 ===================================================== 00:19:03.004 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:19:03.004 ===================================================== 00:19:03.004 Controller Capabilities/Features 00:19:03.004 ================================ 00:19:03.004 Vendor ID: 1b36 00:19:03.004 Subsystem Vendor ID: 1af4 00:19:03.004 Serial Number: 12340 00:19:03.004 Model Number: QEMU NVMe Ctrl 00:19:03.004 Firmware Version: 8.0.0 00:19:03.004 Recommended Arb Burst: 6 00:19:03.004 IEEE OUI Identifier: 00 54 52 00:19:03.004 Multi-path I/O 00:19:03.004 May have multiple subsystem ports: No 00:19:03.004 May have multiple controllers: No 00:19:03.004 Associated with SR-IOV VF: No 00:19:03.004 Max Data Transfer Size: 524288 00:19:03.004 Max Number of Namespaces: 256 00:19:03.004 Max Number of I/O Queues: 64 00:19:03.004 NVMe Specification Version (VS): 1.4 00:19:03.004 NVMe Specification Version (Identify): 1.4 00:19:03.004 Maximum Queue Entries: 2048 00:19:03.004 Contiguous Queues Required: Yes 00:19:03.004 Arbitration Mechanisms Supported 00:19:03.004 Weighted Round Robin: Not Supported 00:19:03.004 Vendor Specific: Not Supported 00:19:03.004 Reset Timeout: 7500 ms 00:19:03.004 Doorbell Stride: 4 bytes 00:19:03.004 NVM Subsystem Reset: Not Supported 00:19:03.004 Command Sets Supported 00:19:03.004 NVM Command Set: Supported 00:19:03.004 Boot Partition: Not Supported 00:19:03.004 Memory Page Size Minimum: 4096 bytes 00:19:03.004 Memory Page Size Maximum: 65536 bytes 00:19:03.004 Persistent Memory Region: Not Supported 00:19:03.004 Optional Asynchronous Events Supported 00:19:03.004 Namespace Attribute Notices: Supported 00:19:03.004 Firmware Activation Notices: Not Supported 00:19:03.004 ANA Change Notices: Not Supported 00:19:03.004 PLE Aggregate Log Change Notices: Not Supported 00:19:03.004 LBA Status Info Alert Notices: Not Supported 00:19:03.004 EGE Aggregate Log Change Notices: Not Supported 00:19:03.004 Normal NVM Subsystem Shutdown event: Not Supported 00:19:03.004 Zone Descriptor Change Notices: Not Supported 00:19:03.004 Discovery Log Change Notices: Not Supported 00:19:03.004 Controller Attributes 00:19:03.004 128-bit Host Identifier: Not Supported 00:19:03.004 Non-Operational Permissive Mode: Not Supported 00:19:03.004 NVM Sets: Not Supported 00:19:03.004 Read Recovery Levels: Not Supported 00:19:03.004 Endurance Groups: Not Supported 00:19:03.004 Predictable Latency Mode: Not Supported 00:19:03.004 Traffic Based Keep ALive: Not Supported 00:19:03.004 Namespace Granularity: Not Supported 00:19:03.004 SQ Associations: Not Supported 00:19:03.004 UUID List: Not Supported 00:19:03.004 Multi-Domain Subsystem: Not Supported 00:19:03.004 Fixed Capacity Management: Not Supported 00:19:03.004 Variable Capacity Management: Not Supported 00:19:03.004 Delete Endurance Group: Not Supported 00:19:03.004 Delete NVM Set: Not Supported 00:19:03.004 Extended LBA Formats Supported: Supported 00:19:03.004 Flexible Data Placement Supported: Not Supported 00:19:03.004 00:19:03.004 Controller Memory Buffer Support 00:19:03.004 ================================ 00:19:03.004 Supported: No 00:19:03.004 00:19:03.004 Persistent Memory Region Support 00:19:03.004 ================================ 00:19:03.004 Supported: No 00:19:03.004 00:19:03.004 Admin Command Set Attributes 00:19:03.004 ============================ 00:19:03.004 Security Send/Receive: Not Supported 00:19:03.004 Format NVM: Supported 00:19:03.004 
Firmware Activate/Download: Not Supported 00:19:03.004 Namespace Management: Supported 00:19:03.004 Device Self-Test: Not Supported 00:19:03.004 Directives: Supported 00:19:03.004 NVMe-MI: Not Supported 00:19:03.004 Virtualization Management: Not Supported 00:19:03.004 Doorbell Buffer Config: Supported 00:19:03.004 Get LBA Status Capability: Not Supported 00:19:03.004 Command & Feature Lockdown Capability: Not Supported 00:19:03.004 Abort Command Limit: 4 00:19:03.004 Async Event Request Limit: 4 00:19:03.004 Number of Firmware Slots: N/A 00:19:03.004 Firmware Slot 1 Read-Only: N/A 00:19:03.004 Firmware Activation Without Reset: N/A 00:19:03.004 Multiple Update Detection Support: N/A 00:19:03.004 [2024-07-10 12:21:12.448034] nvme_ctrlr.c:3604:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 69894 terminated unexpected 00:19:03.004 Firmware Update Granularity: No Information Provided 00:19:03.004 Per-Namespace SMART Log: Yes 00:19:03.004 Asymmetric Namespace Access Log Page: Not Supported 00:19:03.004 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:19:03.004 Command Effects Log Page: Supported 00:19:03.004 Get Log Page Extended Data: Supported 00:19:03.004 Telemetry Log Pages: Not Supported 00:19:03.004 Persistent Event Log Pages: Not Supported 00:19:03.004 Supported Log Pages Log Page: May Support 00:19:03.004 Commands Supported & Effects Log Page: Not Supported 00:19:03.004 Feature Identifiers & Effects Log Page:May Support 00:19:03.004 NVMe-MI Commands & Effects Log Page: May Support 00:19:03.004 Data Area 4 for Telemetry Log: Not Supported 00:19:03.004 Error Log Page Entries Supported: 1 00:19:03.004 Keep Alive: Not Supported 00:19:03.004 00:19:03.004 NVM Command Set Attributes 00:19:03.004 ========================== 00:19:03.004 Submission Queue Entry Size 00:19:03.004 Max: 64 00:19:03.004 Min: 64 00:19:03.004 Completion Queue Entry Size 00:19:03.004 Max: 16 00:19:03.004 Min: 16 00:19:03.004 Number of Namespaces: 256 00:19:03.004 Compare Command: Supported 00:19:03.004 Write Uncorrectable Command: Not Supported 00:19:03.004 Dataset Management Command: Supported 00:19:03.004 Write Zeroes Command: Supported 00:19:03.004 Set Features Save Field: Supported 00:19:03.004 Reservations: Not Supported 00:19:03.004 Timestamp: Supported 00:19:03.004 Copy: Supported 00:19:03.004 Volatile Write Cache: Present 00:19:03.004 Atomic Write Unit (Normal): 1 00:19:03.004 Atomic Write Unit (PFail): 1 00:19:03.004 Atomic Compare & Write Unit: 1 00:19:03.004 Fused Compare & Write: Not Supported 00:19:03.004 Scatter-Gather List 00:19:03.004 SGL Command Set: Supported 00:19:03.004 SGL Keyed: Not Supported 00:19:03.004 SGL Bit Bucket Descriptor: Not Supported 00:19:03.004 SGL Metadata Pointer: Not Supported 00:19:03.004 Oversized SGL: Not Supported 00:19:03.004 SGL Metadata Address: Not Supported 00:19:03.004 SGL Offset: Not Supported 00:19:03.004 Transport SGL Data Block: Not Supported 00:19:03.004 Replay Protected Memory Block: Not Supported 00:19:03.004 00:19:03.004 Firmware Slot Information 00:19:03.004 ========================= 00:19:03.004 Active slot: 1 00:19:03.004 Slot 1 Firmware Revision: 1.0 00:19:03.004 00:19:03.004 00:19:03.004 Commands Supported and Effects 00:19:03.004 ============================== 00:19:03.004 Admin Commands 00:19:03.004 -------------- 00:19:03.004 Delete I/O Submission Queue (00h): Supported 00:19:03.004 Create I/O Submission Queue (01h): Supported 00:19:03.004 Get Log Page (02h): Supported 00:19:03.004 Delete I/O Completion Queue (04h): Supported 00:19:03.004
Create I/O Completion Queue (05h): Supported 00:19:03.004 Identify (06h): Supported 00:19:03.004 Abort (08h): Supported 00:19:03.004 Set Features (09h): Supported 00:19:03.004 Get Features (0Ah): Supported 00:19:03.004 Asynchronous Event Request (0Ch): Supported 00:19:03.004 Namespace Attachment (15h): Supported NS-Inventory-Change 00:19:03.004 Directive Send (19h): Supported 00:19:03.004 Directive Receive (1Ah): Supported 00:19:03.004 Virtualization Management (1Ch): Supported 00:19:03.004 Doorbell Buffer Config (7Ch): Supported 00:19:03.004 Format NVM (80h): Supported LBA-Change 00:19:03.004 I/O Commands 00:19:03.004 ------------ 00:19:03.004 Flush (00h): Supported LBA-Change 00:19:03.004 Write (01h): Supported LBA-Change 00:19:03.004 Read (02h): Supported 00:19:03.004 Compare (05h): Supported 00:19:03.004 Write Zeroes (08h): Supported LBA-Change 00:19:03.004 Dataset Management (09h): Supported LBA-Change 00:19:03.004 Unknown (0Ch): Supported 00:19:03.004 Unknown (12h): Supported 00:19:03.004 Copy (19h): Supported LBA-Change 00:19:03.004 Unknown (1Dh): Supported LBA-Change 00:19:03.004 00:19:03.004 Error Log 00:19:03.004 ========= 00:19:03.004 00:19:03.004 Arbitration 00:19:03.004 =========== 00:19:03.004 Arbitration Burst: no limit 00:19:03.004 00:19:03.004 Power Management 00:19:03.004 ================ 00:19:03.004 Number of Power States: 1 00:19:03.005 Current Power State: Power State #0 00:19:03.005 Power State #0: 00:19:03.005 Max Power: 25.00 W 00:19:03.005 Non-Operational State: Operational 00:19:03.005 Entry Latency: 16 microseconds 00:19:03.005 Exit Latency: 4 microseconds 00:19:03.005 Relative Read Throughput: 0 00:19:03.005 Relative Read Latency: 0 00:19:03.005 Relative Write Throughput: 0 00:19:03.005 Relative Write Latency: 0 00:19:03.005 [2024-07-10 12:21:12.449077] nvme_ctrlr.c:3604:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0] process 69894 terminated unexpected 00:19:03.005 Idle Power: Not Reported 00:19:03.005 Active Power: Not Reported 00:19:03.005 Non-Operational Permissive Mode: Not Supported 00:19:03.005 00:19:03.005 Health Information 00:19:03.005 ================== 00:19:03.005 Critical Warnings: 00:19:03.005 Available Spare Space: OK 00:19:03.005 Temperature: OK 00:19:03.005 Device Reliability: OK 00:19:03.005 Read Only: No 00:19:03.005 Volatile Memory Backup: OK 00:19:03.005 Current Temperature: 323 Kelvin (50 Celsius) 00:19:03.005 Temperature Threshold: 343 Kelvin (70 Celsius) 00:19:03.005 Available Spare: 0% 00:19:03.005 Available Spare Threshold: 0% 00:19:03.005 Life Percentage Used: 0% 00:19:03.005 Data Units Read: 1156 00:19:03.005 Data Units Written: 983 00:19:03.005 Host Read Commands: 54308 00:19:03.005 Host Write Commands: 52748 00:19:03.005 Controller Busy Time: 0 minutes 00:19:03.005 Power Cycles: 0 00:19:03.005 Power On Hours: 0 hours 00:19:03.005 Unsafe Shutdowns: 0 00:19:03.005 Unrecoverable Media Errors: 0 00:19:03.005 Lifetime Error Log Entries: 0 00:19:03.005 Warning Temperature Time: 0 minutes 00:19:03.005 Critical Temperature Time: 0 minutes 00:19:03.005 00:19:03.005 Number of Queues 00:19:03.005 ================ 00:19:03.005 Number of I/O Submission Queues: 64 00:19:03.005 Number of I/O Completion Queues: 64 00:19:03.005 00:19:03.005 ZNS Specific Controller Data 00:19:03.005 ============================ 00:19:03.005 Zone Append Size Limit: 0 00:19:03.005 00:19:03.005 00:19:03.005 Active Namespaces 00:19:03.005 ================= 00:19:03.005 Namespace ID:1 00:19:03.005 Error Recovery Timeout: Unlimited 00:19:03.005 Command Set
Identifier: NVM (00h) 00:19:03.005 Deallocate: Supported 00:19:03.005 Deallocated/Unwritten Error: Supported 00:19:03.005 Deallocated Read Value: All 0x00 00:19:03.005 Deallocate in Write Zeroes: Not Supported 00:19:03.005 Deallocated Guard Field: 0xFFFF 00:19:03.005 Flush: Supported 00:19:03.005 Reservation: Not Supported 00:19:03.005 Metadata Transferred as: Separate Metadata Buffer 00:19:03.005 Namespace Sharing Capabilities: Private 00:19:03.005 Size (in LBAs): 1548666 (5GiB) 00:19:03.005 Capacity (in LBAs): 1548666 (5GiB) 00:19:03.005 Utilization (in LBAs): 1548666 (5GiB) 00:19:03.005 Thin Provisioning: Not Supported 00:19:03.005 Per-NS Atomic Units: No 00:19:03.005 Maximum Single Source Range Length: 128 00:19:03.005 Maximum Copy Length: 128 00:19:03.005 Maximum Source Range Count: 128 00:19:03.005 NGUID/EUI64 Never Reused: No 00:19:03.005 Namespace Write Protected: No 00:19:03.005 Number of LBA Formats: 8 00:19:03.005 Current LBA Format: LBA Format #07 00:19:03.005 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:03.005 LBA Format #01: Data Size: 512 Metadata Size: 8 00:19:03.005 LBA Format #02: Data Size: 512 Metadata Size: 16 00:19:03.005 LBA Format #03: Data Size: 512 Metadata Size: 64 00:19:03.005 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:19:03.005 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:19:03.005 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:19:03.005 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:19:03.005 00:19:03.005 NVM Specific Namespace Data 00:19:03.005 =========================== 00:19:03.005 Logical Block Storage Tag Mask: 0 00:19:03.005 Protection Information Capabilities: 00:19:03.005 16b Guard Protection Information Storage Tag Support: No 00:19:03.005 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:19:03.005 Storage Tag Check Read Support: No 00:19:03.005 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.005 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.005 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.005 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.005 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.005 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.005 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.005 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.005 ===================================================== 00:19:03.005 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:19:03.005 ===================================================== 00:19:03.005 Controller Capabilities/Features 00:19:03.005 ================================ 00:19:03.005 Vendor ID: 1b36 00:19:03.005 Subsystem Vendor ID: 1af4 00:19:03.005 Serial Number: 12341 00:19:03.005 Model Number: QEMU NVMe Ctrl 00:19:03.005 Firmware Version: 8.0.0 00:19:03.005 Recommended Arb Burst: 6 00:19:03.005 IEEE OUI Identifier: 00 54 52 00:19:03.005 Multi-path I/O 00:19:03.005 May have multiple subsystem ports: No 00:19:03.005 May have multiple controllers: No 00:19:03.005 Associated with SR-IOV VF: No 00:19:03.005 Max Data Transfer Size: 524288 00:19:03.005 Max Number of Namespaces: 256 00:19:03.005 Max Number of I/O Queues: 64 
00:19:03.005 NVMe Specification Version (VS): 1.4 00:19:03.005 NVMe Specification Version (Identify): 1.4 00:19:03.005 Maximum Queue Entries: 2048 00:19:03.005 Contiguous Queues Required: Yes 00:19:03.005 Arbitration Mechanisms Supported 00:19:03.005 Weighted Round Robin: Not Supported 00:19:03.005 Vendor Specific: Not Supported 00:19:03.005 Reset Timeout: 7500 ms 00:19:03.005 Doorbell Stride: 4 bytes 00:19:03.005 NVM Subsystem Reset: Not Supported 00:19:03.005 Command Sets Supported 00:19:03.005 NVM Command Set: Supported 00:19:03.005 Boot Partition: Not Supported 00:19:03.005 Memory Page Size Minimum: 4096 bytes 00:19:03.005 Memory Page Size Maximum: 65536 bytes 00:19:03.005 Persistent Memory Region: Not Supported 00:19:03.005 Optional Asynchronous Events Supported 00:19:03.005 Namespace Attribute Notices: Supported 00:19:03.005 Firmware Activation Notices: Not Supported 00:19:03.005 ANA Change Notices: Not Supported 00:19:03.005 PLE Aggregate Log Change Notices: Not Supported 00:19:03.005 LBA Status Info Alert Notices: Not Supported 00:19:03.005 EGE Aggregate Log Change Notices: Not Supported 00:19:03.005 Normal NVM Subsystem Shutdown event: Not Supported 00:19:03.005 Zone Descriptor Change Notices: Not Supported 00:19:03.005 Discovery Log Change Notices: Not Supported 00:19:03.005 Controller Attributes 00:19:03.005 128-bit Host Identifier: Not Supported 00:19:03.005 Non-Operational Permissive Mode: Not Supported 00:19:03.005 NVM Sets: Not Supported 00:19:03.005 Read Recovery Levels: Not Supported 00:19:03.005 Endurance Groups: Not Supported 00:19:03.005 Predictable Latency Mode: Not Supported 00:19:03.005 Traffic Based Keep ALive: Not Supported 00:19:03.005 Namespace Granularity: Not Supported 00:19:03.005 SQ Associations: Not Supported 00:19:03.005 UUID List: Not Supported 00:19:03.005 Multi-Domain Subsystem: Not Supported 00:19:03.005 Fixed Capacity Management: Not Supported 00:19:03.005 Variable Capacity Management: Not Supported 00:19:03.005 Delete Endurance Group: Not Supported 00:19:03.005 Delete NVM Set: Not Supported 00:19:03.005 Extended LBA Formats Supported: Supported 00:19:03.005 Flexible Data Placement Supported: Not Supported 00:19:03.005 00:19:03.005 Controller Memory Buffer Support 00:19:03.005 ================================ 00:19:03.005 Supported: No 00:19:03.005 00:19:03.005 Persistent Memory Region Support 00:19:03.005 ================================ 00:19:03.005 Supported: No 00:19:03.005 00:19:03.005 Admin Command Set Attributes 00:19:03.005 ============================ 00:19:03.005 Security Send/Receive: Not Supported 00:19:03.005 Format NVM: Supported 00:19:03.005 Firmware Activate/Download: Not Supported 00:19:03.005 Namespace Management: Supported 00:19:03.005 Device Self-Test: Not Supported 00:19:03.005 Directives: Supported 00:19:03.005 NVMe-MI: Not Supported 00:19:03.005 Virtualization Management: Not Supported 00:19:03.005 Doorbell Buffer Config: Supported 00:19:03.005 Get LBA Status Capability: Not Supported 00:19:03.005 Command & Feature Lockdown Capability: Not Supported 00:19:03.006 Abort Command Limit: 4 00:19:03.006 Async Event Request Limit: 4 00:19:03.006 Number of Firmware Slots: N/A 00:19:03.006 Firmware Slot 1 Read-Only: N/A 00:19:03.006 Firmware Activation Without Reset: N/A 00:19:03.006 Multiple Update Detection Support: N/A 00:19:03.006 Firmware Update Granularity: No Information Provided 00:19:03.006 Per-Namespace SMART Log: Yes 00:19:03.006 Asymmetric Namespace Access Log Page: Not Supported 00:19:03.006 Subsystem NQN: 
nqn.2019-08.org.qemu:12341 00:19:03.006 Command Effects Log Page: Supported 00:19:03.006 Get Log Page Extended Data: Supported 00:19:03.006 Telemetry Log Pages: Not Supported 00:19:03.006 Persistent Event Log Pages: Not Supported 00:19:03.006 Supported Log Pages Log Page: May Support 00:19:03.006 Commands Supported & Effects Log Page: Not Supported 00:19:03.006 Feature Identifiers & Effects Log Page:May Support 00:19:03.006 NVMe-MI Commands & Effects Log Page: May Support 00:19:03.006 Data Area 4 for Telemetry Log: Not Supported 00:19:03.006 Error Log Page Entries Supported: 1 00:19:03.006 Keep Alive: Not Supported 00:19:03.006 00:19:03.006 NVM Command Set Attributes 00:19:03.006 ========================== 00:19:03.006 Submission Queue Entry Size 00:19:03.006 Max: 64 00:19:03.006 Min: 64 00:19:03.006 Completion Queue Entry Size 00:19:03.006 Max: 16 00:19:03.006 Min: 16 00:19:03.006 Number of Namespaces: 256 00:19:03.006 Compare Command: Supported 00:19:03.006 Write Uncorrectable Command: Not Supported 00:19:03.006 Dataset Management Command: Supported 00:19:03.006 Write Zeroes Command: Supported 00:19:03.006 Set Features Save Field: Supported 00:19:03.006 Reservations: Not Supported 00:19:03.006 Timestamp: Supported 00:19:03.006 Copy: Supported 00:19:03.006 Volatile Write Cache: Present 00:19:03.006 Atomic Write Unit (Normal): 1 00:19:03.006 Atomic Write Unit (PFail): 1 00:19:03.006 Atomic Compare & Write Unit: 1 00:19:03.006 Fused Compare & Write: Not Supported 00:19:03.006 Scatter-Gather List 00:19:03.006 SGL Command Set: Supported 00:19:03.006 SGL Keyed: Not Supported 00:19:03.006 SGL Bit Bucket Descriptor: Not Supported 00:19:03.006 SGL Metadata Pointer: Not Supported 00:19:03.006 Oversized SGL: Not Supported 00:19:03.006 SGL Metadata Address: Not Supported 00:19:03.006 SGL Offset: Not Supported 00:19:03.006 Transport SGL Data Block: Not Supported 00:19:03.006 Replay Protected Memory Block: Not Supported 00:19:03.006 00:19:03.006 Firmware Slot Information 00:19:03.006 ========================= 00:19:03.006 Active slot: 1 00:19:03.006 Slot 1 Firmware Revision: 1.0 00:19:03.006 00:19:03.006 00:19:03.006 Commands Supported and Effects 00:19:03.006 ============================== 00:19:03.006 Admin Commands 00:19:03.006 -------------- 00:19:03.006 Delete I/O Submission Queue (00h): Supported 00:19:03.006 Create I/O Submission Queue (01h): Supported 00:19:03.006 Get Log Page (02h): Supported 00:19:03.006 Delete I/O Completion Queue (04h): Supported 00:19:03.006 Create I/O Completion Queue (05h): Supported 00:19:03.006 Identify (06h): Supported 00:19:03.006 Abort (08h): Supported 00:19:03.006 Set Features (09h): Supported 00:19:03.006 Get Features (0Ah): Supported 00:19:03.006 Asynchronous Event Request (0Ch): Supported 00:19:03.006 Namespace Attachment (15h): Supported NS-Inventory-Change 00:19:03.006 Directive Send (19h): Supported 00:19:03.006 Directive Receive (1Ah): Supported 00:19:03.006 Virtualization Management (1Ch): Supported 00:19:03.006 Doorbell Buffer Config (7Ch): Supported 00:19:03.006 Format NVM (80h): Supported LBA-Change 00:19:03.006 I/O Commands 00:19:03.006 ------------ 00:19:03.006 Flush (00h): Supported LBA-Change 00:19:03.006 Write (01h): Supported LBA-Change 00:19:03.006 Read (02h): Supported 00:19:03.006 Compare (05h): Supported 00:19:03.006 Write Zeroes (08h): Supported LBA-Change 00:19:03.006 Dataset Management (09h): Supported LBA-Change 00:19:03.006 Unknown (0Ch): Supported 00:19:03.006 Unknown (12h): Supported 00:19:03.006 Copy (19h): Supported LBA-Change 
00:19:03.006 Unknown (1Dh): Supported LBA-Change 00:19:03.006 00:19:03.006 Error Log 00:19:03.006 ========= 00:19:03.006 00:19:03.006 Arbitration 00:19:03.006 =========== 00:19:03.006 Arbitration Burst: no limit 00:19:03.006 00:19:03.006 Power Management 00:19:03.006 ================ 00:19:03.006 Number of Power States: 1 00:19:03.006 Current Power State: Power State #0 00:19:03.006 Power State #0: 00:19:03.006 Max Power: 25.00 W 00:19:03.006 Non-Operational State: Operational 00:19:03.006 Entry Latency: 16 microseconds 00:19:03.006 Exit Latency: 4 microseconds 00:19:03.006 Relative Read Throughput: 0 00:19:03.006 Relative Read Latency: 0 00:19:03.006 Relative Write Throughput: 0 00:19:03.006 Relative Write Latency: 0 00:19:03.006 Idle Power: Not Reported 00:19:03.006 Active Power: Not Reported 00:19:03.006 Non-Operational Permissive Mode: Not Supported 00:19:03.006 00:19:03.006 Health Information 00:19:03.006 ================== 00:19:03.006 Critical Warnings: 00:19:03.006 Available Spare Space: OK 00:19:03.006 [2024-07-10 12:21:12.450013] nvme_ctrlr.c:3604:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0] process 69894 terminated unexpected 00:19:03.006 Temperature: OK 00:19:03.006 Device Reliability: OK 00:19:03.006 Read Only: No 00:19:03.006 Volatile Memory Backup: OK 00:19:03.006 Current Temperature: 323 Kelvin (50 Celsius) 00:19:03.006 Temperature Threshold: 343 Kelvin (70 Celsius) 00:19:03.006 Available Spare: 0% 00:19:03.006 Available Spare Threshold: 0% 00:19:03.006 Life Percentage Used: 0% 00:19:03.006 Data Units Read: 859 00:19:03.006 Data Units Written: 701 00:19:03.006 Host Read Commands: 39261 00:19:03.006 Host Write Commands: 36847 00:19:03.006 Controller Busy Time: 0 minutes 00:19:03.006 Power Cycles: 0 00:19:03.006 Power On Hours: 0 hours 00:19:03.006 Unsafe Shutdowns: 0 00:19:03.006 Unrecoverable Media Errors: 0 00:19:03.006 Lifetime Error Log Entries: 0 00:19:03.006 Warning Temperature Time: 0 minutes 00:19:03.006 Critical Temperature Time: 0 minutes 00:19:03.006 00:19:03.006 Number of Queues 00:19:03.006 ================ 00:19:03.006 Number of I/O Submission Queues: 64 00:19:03.006 Number of I/O Completion Queues: 64 00:19:03.006 00:19:03.006 ZNS Specific Controller Data 00:19:03.006 ============================ 00:19:03.006 Zone Append Size Limit: 0 00:19:03.006 00:19:03.006 00:19:03.006 Active Namespaces 00:19:03.006 ================= 00:19:03.006 Namespace ID:1 00:19:03.006 Error Recovery Timeout: Unlimited 00:19:03.006 Command Set Identifier: NVM (00h) 00:19:03.006 Deallocate: Supported 00:19:03.006 Deallocated/Unwritten Error: Supported 00:19:03.006 Deallocated Read Value: All 0x00 00:19:03.006 Deallocate in Write Zeroes: Not Supported 00:19:03.006 Deallocated Guard Field: 0xFFFF 00:19:03.006 Flush: Supported 00:19:03.006 Reservation: Not Supported 00:19:03.006 Namespace Sharing Capabilities: Private 00:19:03.006 Size (in LBAs): 1310720 (5GiB) 00:19:03.006 Capacity (in LBAs): 1310720 (5GiB) 00:19:03.006 Utilization (in LBAs): 1310720 (5GiB) 00:19:03.006 Thin Provisioning: Not Supported 00:19:03.006 Per-NS Atomic Units: No 00:19:03.006 Maximum Single Source Range Length: 128 00:19:03.006 Maximum Copy Length: 128 00:19:03.006 Maximum Source Range Count: 128 00:19:03.006 NGUID/EUI64 Never Reused: No 00:19:03.006 Namespace Write Protected: No 00:19:03.006 Number of LBA Formats: 8 00:19:03.006 Current LBA Format: LBA Format #04 00:19:03.006 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:03.006 LBA Format #01: Data Size: 512 Metadata Size: 8 00:19:03.006 LBA
Format #02: Data Size: 512 Metadata Size: 16 00:19:03.006 LBA Format #03: Data Size: 512 Metadata Size: 64 00:19:03.006 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:19:03.006 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:19:03.006 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:19:03.006 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:19:03.006 00:19:03.006 NVM Specific Namespace Data 00:19:03.006 =========================== 00:19:03.006 Logical Block Storage Tag Mask: 0 00:19:03.006 Protection Information Capabilities: 00:19:03.006 16b Guard Protection Information Storage Tag Support: No 00:19:03.006 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:19:03.006 Storage Tag Check Read Support: No 00:19:03.006 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.006 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.006 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.006 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.006 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.006 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.006 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.006 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.007 ===================================================== 00:19:03.007 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:19:03.007 ===================================================== 00:19:03.007 Controller Capabilities/Features 00:19:03.007 ================================ 00:19:03.007 Vendor ID: 1b36 00:19:03.007 Subsystem Vendor ID: 1af4 00:19:03.007 Serial Number: 12343 00:19:03.007 Model Number: QEMU NVMe Ctrl 00:19:03.007 Firmware Version: 8.0.0 00:19:03.007 Recommended Arb Burst: 6 00:19:03.007 IEEE OUI Identifier: 00 54 52 00:19:03.007 Multi-path I/O 00:19:03.007 May have multiple subsystem ports: No 00:19:03.007 May have multiple controllers: Yes 00:19:03.007 Associated with SR-IOV VF: No 00:19:03.007 Max Data Transfer Size: 524288 00:19:03.007 Max Number of Namespaces: 256 00:19:03.007 Max Number of I/O Queues: 64 00:19:03.007 NVMe Specification Version (VS): 1.4 00:19:03.007 NVMe Specification Version (Identify): 1.4 00:19:03.007 Maximum Queue Entries: 2048 00:19:03.007 Contiguous Queues Required: Yes 00:19:03.007 Arbitration Mechanisms Supported 00:19:03.007 Weighted Round Robin: Not Supported 00:19:03.007 Vendor Specific: Not Supported 00:19:03.007 Reset Timeout: 7500 ms 00:19:03.007 Doorbell Stride: 4 bytes 00:19:03.007 NVM Subsystem Reset: Not Supported 00:19:03.007 Command Sets Supported 00:19:03.007 NVM Command Set: Supported 00:19:03.007 Boot Partition: Not Supported 00:19:03.007 Memory Page Size Minimum: 4096 bytes 00:19:03.007 Memory Page Size Maximum: 65536 bytes 00:19:03.007 Persistent Memory Region: Not Supported 00:19:03.007 Optional Asynchronous Events Supported 00:19:03.007 Namespace Attribute Notices: Supported 00:19:03.007 Firmware Activation Notices: Not Supported 00:19:03.007 ANA Change Notices: Not Supported 00:19:03.007 PLE Aggregate Log Change Notices: Not Supported 00:19:03.007 LBA Status Info Alert Notices: Not Supported 00:19:03.007 EGE Aggregate Log Change Notices: Not Supported 
00:19:03.007 Normal NVM Subsystem Shutdown event: Not Supported 00:19:03.007 Zone Descriptor Change Notices: Not Supported 00:19:03.007 Discovery Log Change Notices: Not Supported 00:19:03.007 Controller Attributes 00:19:03.007 128-bit Host Identifier: Not Supported 00:19:03.007 Non-Operational Permissive Mode: Not Supported 00:19:03.007 NVM Sets: Not Supported 00:19:03.007 Read Recovery Levels: Not Supported 00:19:03.007 Endurance Groups: Supported 00:19:03.007 Predictable Latency Mode: Not Supported 00:19:03.007 Traffic Based Keep ALive: Not Supported 00:19:03.007 Namespace Granularity: Not Supported 00:19:03.007 SQ Associations: Not Supported 00:19:03.007 UUID List: Not Supported 00:19:03.007 Multi-Domain Subsystem: Not Supported 00:19:03.007 Fixed Capacity Management: Not Supported 00:19:03.007 Variable Capacity Management: Not Supported 00:19:03.007 Delete Endurance Group: Not Supported 00:19:03.007 Delete NVM Set: Not Supported 00:19:03.007 Extended LBA Formats Supported: Supported 00:19:03.007 Flexible Data Placement Supported: Supported 00:19:03.007 00:19:03.007 Controller Memory Buffer Support 00:19:03.007 ================================ 00:19:03.007 Supported: No 00:19:03.007 00:19:03.007 Persistent Memory Region Support 00:19:03.007 ================================ 00:19:03.007 Supported: No 00:19:03.007 00:19:03.007 Admin Command Set Attributes 00:19:03.007 ============================ 00:19:03.007 Security Send/Receive: Not Supported 00:19:03.007 Format NVM: Supported 00:19:03.007 Firmware Activate/Download: Not Supported 00:19:03.007 Namespace Management: Supported 00:19:03.007 Device Self-Test: Not Supported 00:19:03.007 Directives: Supported 00:19:03.007 NVMe-MI: Not Supported 00:19:03.007 Virtualization Management: Not Supported 00:19:03.007 Doorbell Buffer Config: Supported 00:19:03.007 Get LBA Status Capability: Not Supported 00:19:03.007 Command & Feature Lockdown Capability: Not Supported 00:19:03.007 Abort Command Limit: 4 00:19:03.007 Async Event Request Limit: 4 00:19:03.007 Number of Firmware Slots: N/A 00:19:03.007 Firmware Slot 1 Read-Only: N/A 00:19:03.007 Firmware Activation Without Reset: N/A 00:19:03.007 Multiple Update Detection Support: N/A 00:19:03.007 Firmware Update Granularity: No Information Provided 00:19:03.007 Per-Namespace SMART Log: Yes 00:19:03.007 Asymmetric Namespace Access Log Page: Not Supported 00:19:03.007 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:19:03.007 Command Effects Log Page: Supported 00:19:03.007 Get Log Page Extended Data: Supported 00:19:03.007 Telemetry Log Pages: Not Supported 00:19:03.007 Persistent Event Log Pages: Not Supported 00:19:03.007 Supported Log Pages Log Page: May Support 00:19:03.007 Commands Supported & Effects Log Page: Not Supported 00:19:03.007 Feature Identifiers & Effects Log Page:May Support 00:19:03.007 NVMe-MI Commands & Effects Log Page: May Support 00:19:03.007 Data Area 4 for Telemetry Log: Not Supported 00:19:03.007 Error Log Page Entries Supported: 1 00:19:03.007 Keep Alive: Not Supported 00:19:03.007 00:19:03.007 NVM Command Set Attributes 00:19:03.007 ========================== 00:19:03.007 Submission Queue Entry Size 00:19:03.007 Max: 64 00:19:03.007 Min: 64 00:19:03.007 Completion Queue Entry Size 00:19:03.007 Max: 16 00:19:03.007 Min: 16 00:19:03.007 Number of Namespaces: 256 00:19:03.007 Compare Command: Supported 00:19:03.007 Write Uncorrectable Command: Not Supported 00:19:03.007 Dataset Management Command: Supported 00:19:03.007 Write Zeroes Command: Supported 00:19:03.007 Set 
Features Save Field: Supported 00:19:03.007 Reservations: Not Supported 00:19:03.007 Timestamp: Supported 00:19:03.007 Copy: Supported 00:19:03.007 Volatile Write Cache: Present 00:19:03.007 Atomic Write Unit (Normal): 1 00:19:03.007 Atomic Write Unit (PFail): 1 00:19:03.007 Atomic Compare & Write Unit: 1 00:19:03.007 Fused Compare & Write: Not Supported 00:19:03.007 Scatter-Gather List 00:19:03.007 SGL Command Set: Supported 00:19:03.007 SGL Keyed: Not Supported 00:19:03.007 SGL Bit Bucket Descriptor: Not Supported 00:19:03.007 SGL Metadata Pointer: Not Supported 00:19:03.007 Oversized SGL: Not Supported 00:19:03.007 SGL Metadata Address: Not Supported 00:19:03.007 SGL Offset: Not Supported 00:19:03.007 Transport SGL Data Block: Not Supported 00:19:03.007 Replay Protected Memory Block: Not Supported 00:19:03.007 00:19:03.007 Firmware Slot Information 00:19:03.007 ========================= 00:19:03.007 Active slot: 1 00:19:03.007 Slot 1 Firmware Revision: 1.0 00:19:03.007 00:19:03.007 00:19:03.007 Commands Supported and Effects 00:19:03.007 ============================== 00:19:03.007 Admin Commands 00:19:03.007 -------------- 00:19:03.007 Delete I/O Submission Queue (00h): Supported 00:19:03.007 Create I/O Submission Queue (01h): Supported 00:19:03.007 Get Log Page (02h): Supported 00:19:03.007 Delete I/O Completion Queue (04h): Supported 00:19:03.007 Create I/O Completion Queue (05h): Supported 00:19:03.007 Identify (06h): Supported 00:19:03.007 Abort (08h): Supported 00:19:03.007 Set Features (09h): Supported 00:19:03.007 Get Features (0Ah): Supported 00:19:03.007 Asynchronous Event Request (0Ch): Supported 00:19:03.007 Namespace Attachment (15h): Supported NS-Inventory-Change 00:19:03.007 Directive Send (19h): Supported 00:19:03.007 Directive Receive (1Ah): Supported 00:19:03.007 Virtualization Management (1Ch): Supported 00:19:03.007 Doorbell Buffer Config (7Ch): Supported 00:19:03.007 Format NVM (80h): Supported LBA-Change 00:19:03.007 I/O Commands 00:19:03.007 ------------ 00:19:03.007 Flush (00h): Supported LBA-Change 00:19:03.007 Write (01h): Supported LBA-Change 00:19:03.007 Read (02h): Supported 00:19:03.007 Compare (05h): Supported 00:19:03.007 Write Zeroes (08h): Supported LBA-Change 00:19:03.007 Dataset Management (09h): Supported LBA-Change 00:19:03.007 Unknown (0Ch): Supported 00:19:03.007 Unknown (12h): Supported 00:19:03.007 Copy (19h): Supported LBA-Change 00:19:03.007 Unknown (1Dh): Supported LBA-Change 00:19:03.007 00:19:03.007 Error Log 00:19:03.007 ========= 00:19:03.007 00:19:03.007 Arbitration 00:19:03.007 =========== 00:19:03.007 Arbitration Burst: no limit 00:19:03.007 00:19:03.007 Power Management 00:19:03.007 ================ 00:19:03.007 Number of Power States: 1 00:19:03.007 Current Power State: Power State #0 00:19:03.007 Power State #0: 00:19:03.007 Max Power: 25.00 W 00:19:03.007 Non-Operational State: Operational 00:19:03.007 Entry Latency: 16 microseconds 00:19:03.007 Exit Latency: 4 microseconds 00:19:03.007 Relative Read Throughput: 0 00:19:03.007 Relative Read Latency: 0 00:19:03.007 Relative Write Throughput: 0 00:19:03.007 Relative Write Latency: 0 00:19:03.007 Idle Power: Not Reported 00:19:03.007 Active Power: Not Reported 00:19:03.007 Non-Operational Permissive Mode: Not Supported 00:19:03.007 00:19:03.007 Health Information 00:19:03.007 ================== 00:19:03.007 Critical Warnings: 00:19:03.007 Available Spare Space: OK 00:19:03.007 Temperature: OK 00:19:03.007 Device Reliability: OK 00:19:03.007 Read Only: No 00:19:03.007 Volatile Memory 
Backup: OK 00:19:03.007 Current Temperature: 323 Kelvin (50 Celsius) 00:19:03.007 Temperature Threshold: 343 Kelvin (70 Celsius) 00:19:03.007 Available Spare: 0% 00:19:03.007 Available Spare Threshold: 0% 00:19:03.007 Life Percentage Used: 0% 00:19:03.007 Data Units Read: 873 00:19:03.007 Data Units Written: 767 00:19:03.008 Host Read Commands: 38895 00:19:03.008 Host Write Commands: 37485 00:19:03.008 Controller Busy Time: 0 minutes 00:19:03.008 Power Cycles: 0 00:19:03.008 Power On Hours: 0 hours 00:19:03.008 Unsafe Shutdowns: 0 00:19:03.008 Unrecoverable Media Errors: 0 00:19:03.008 Lifetime Error Log Entries: 0 00:19:03.008 Warning Temperature Time: 0 minutes 00:19:03.008 Critical Temperature Time: 0 minutes 00:19:03.008 00:19:03.008 Number of Queues 00:19:03.008 ================ 00:19:03.008 Number of I/O Submission Queues: 64 00:19:03.008 Number of I/O Completion Queues: 64 00:19:03.008 00:19:03.008 ZNS Specific Controller Data 00:19:03.008 ============================ 00:19:03.008 Zone Append Size Limit: 0 00:19:03.008 00:19:03.008 00:19:03.008 Active Namespaces 00:19:03.008 ================= 00:19:03.008 Namespace ID:1 00:19:03.008 Error Recovery Timeout: Unlimited 00:19:03.008 Command Set Identifier: NVM (00h) 00:19:03.008 Deallocate: Supported 00:19:03.008 Deallocated/Unwritten Error: Supported 00:19:03.008 Deallocated Read Value: All 0x00 00:19:03.008 Deallocate in Write Zeroes: Not Supported 00:19:03.008 Deallocated Guard Field: 0xFFFF 00:19:03.008 Flush: Supported 00:19:03.008 Reservation: Not Supported 00:19:03.008 Namespace Sharing Capabilities: Multiple Controllers 00:19:03.008 Size (in LBAs): 262144 (1GiB) 00:19:03.008 Capacity (in LBAs): 262144 (1GiB) 00:19:03.008 Utilization (in LBAs): 262144 (1GiB) 00:19:03.008 Thin Provisioning: Not Supported 00:19:03.008 Per-NS Atomic Units: No 00:19:03.008 Maximum Single Source Range Length: 128 00:19:03.008 Maximum Copy Length: 128 00:19:03.008 Maximum Source Range Count: 128 00:19:03.008 NGUID/EUI64 Never Reused: No 00:19:03.008 Namespace Write Protected: No 00:19:03.008 Endurance group ID: 1 00:19:03.008 Number of LBA Formats: 8 00:19:03.008 Current LBA Format: LBA Format #04 00:19:03.008 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:03.008 LBA Format #01: Data Size: 512 Metadata Size: 8 00:19:03.008 LBA Format #02: Data Size: 512 Metadata Size: 16 00:19:03.008 LBA Format #03: Data Size: 512 Metadata Size: 64 00:19:03.008 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:19:03.008 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:19:03.008 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:19:03.008 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:19:03.008 00:19:03.008 Get Feature FDP: 00:19:03.008 ================ 00:19:03.008 Enabled: Yes 00:19:03.008 FDP configuration index: 0 00:19:03.008 00:19:03.008 FDP configurations log page 00:19:03.008 =========================== 00:19:03.008 Number of FDP configurations: 1 00:19:03.008 Version: 0 00:19:03.008 Size: 112 00:19:03.008 FDP Configuration Descriptor: 0 00:19:03.008 Descriptor Size: 96 00:19:03.008 Reclaim Group Identifier format: 2 00:19:03.008 FDP Volatile Write Cache: Not Present 00:19:03.008 FDP Configuration: Valid 00:19:03.008 Vendor Specific Size: 0 00:19:03.008 Number of Reclaim Groups: 2 00:19:03.008 Number of Reclaim Unit Handles: 8 00:19:03.008 Max Placement Identifiers: 128 00:19:03.008 Number of Namespaces Supported: 256 00:19:03.008 Reclaim unit Nominal Size: 6000000 bytes 00:19:03.008 Estimated Reclaim Unit Time Limit: Not Reported
00:19:03.008 RUH Desc #000: RUH Type: Initially Isolated 00:19:03.008 RUH Desc #001: RUH Type: Initially Isolated 00:19:03.008 RUH Desc #002: RUH Type: Initially Isolated 00:19:03.008 RUH Desc #003: RUH Type: Initially Isolated 00:19:03.008 RUH Desc #004: RUH Type: Initially Isolated 00:19:03.008 RUH Desc #005: RUH Type: Initially Isolated 00:19:03.008 RUH Desc #006: RUH Type: Initially Isolated 00:19:03.008 RUH Desc #007: RUH Type: Initially Isolated 00:19:03.008 00:19:03.008 FDP reclaim unit handle usage log page 00:19:03.008 ====================================== 00:19:03.008 Number of Reclaim Unit Handles: 8 00:19:03.008 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:19:03.008 RUH Usage Desc #001: RUH Attributes: Unused 00:19:03.008 RUH Usage Desc #002: RUH Attributes: Unused 00:19:03.008 RUH Usage Desc #003: RUH Attributes: Unused 00:19:03.008 RUH Usage Desc #004: RUH Attributes: Unused 00:19:03.008 RUH Usage Desc #005: RUH Attributes: Unused 00:19:03.008 RUH Usage Desc #006: RUH Attributes: Unused 00:19:03.008 RUH Usage Desc #007: RUH Attributes: Unused 00:19:03.008 00:19:03.008 FDP statistics log page 00:19:03.008 ======================= 00:19:03.008 Host bytes with metadata written: 501915648 00:19:03.008 [2024-07-10 12:21:12.451703] nvme_ctrlr.c:3604:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0] process 69894 terminated unexpected 00:19:03.008 Media bytes with metadata written: 501968896 00:19:03.008 Media bytes erased: 0 00:19:03.008 00:19:03.008 FDP events log page 00:19:03.008 =================== 00:19:03.008 Number of FDP events: 0 00:19:03.008 00:19:03.008 NVM Specific Namespace Data 00:19:03.008 =========================== 00:19:03.008 Logical Block Storage Tag Mask: 0 00:19:03.008 Protection Information Capabilities: 00:19:03.008 16b Guard Protection Information Storage Tag Support: No 00:19:03.008 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:19:03.008 Storage Tag Check Read Support: No 00:19:03.008 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.008 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.008 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.008 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.008 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.008 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.008 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.008 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.008 ===================================================== 00:19:03.008 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:19:03.008 ===================================================== 00:19:03.008 Controller Capabilities/Features 00:19:03.008 ================================ 00:19:03.008 Vendor ID: 1b36 00:19:03.008 Subsystem Vendor ID: 1af4 00:19:03.008 Serial Number: 12342 00:19:03.008 Model Number: QEMU NVMe Ctrl 00:19:03.008 Firmware Version: 8.0.0 00:19:03.008 Recommended Arb Burst: 6 00:19:03.008 IEEE OUI Identifier: 00 54 52 00:19:03.008 Multi-path I/O 00:19:03.008 May have multiple subsystem ports: No 00:19:03.008 May have multiple controllers: No 00:19:03.008 Associated with SR-IOV VF: No 00:19:03.008
Max Data Transfer Size: 524288 00:19:03.008 Max Number of Namespaces: 256 00:19:03.008 Max Number of I/O Queues: 64 00:19:03.008 NVMe Specification Version (VS): 1.4 00:19:03.008 NVMe Specification Version (Identify): 1.4 00:19:03.008 Maximum Queue Entries: 2048 00:19:03.008 Contiguous Queues Required: Yes 00:19:03.008 Arbitration Mechanisms Supported 00:19:03.008 Weighted Round Robin: Not Supported 00:19:03.008 Vendor Specific: Not Supported 00:19:03.008 Reset Timeout: 7500 ms 00:19:03.008 Doorbell Stride: 4 bytes 00:19:03.009 NVM Subsystem Reset: Not Supported 00:19:03.009 Command Sets Supported 00:19:03.009 NVM Command Set: Supported 00:19:03.009 Boot Partition: Not Supported 00:19:03.009 Memory Page Size Minimum: 4096 bytes 00:19:03.009 Memory Page Size Maximum: 65536 bytes 00:19:03.009 Persistent Memory Region: Not Supported 00:19:03.009 Optional Asynchronous Events Supported 00:19:03.009 Namespace Attribute Notices: Supported 00:19:03.009 Firmware Activation Notices: Not Supported 00:19:03.009 ANA Change Notices: Not Supported 00:19:03.009 PLE Aggregate Log Change Notices: Not Supported 00:19:03.009 LBA Status Info Alert Notices: Not Supported 00:19:03.009 EGE Aggregate Log Change Notices: Not Supported 00:19:03.009 Normal NVM Subsystem Shutdown event: Not Supported 00:19:03.009 Zone Descriptor Change Notices: Not Supported 00:19:03.009 Discovery Log Change Notices: Not Supported 00:19:03.009 Controller Attributes 00:19:03.009 128-bit Host Identifier: Not Supported 00:19:03.009 Non-Operational Permissive Mode: Not Supported 00:19:03.009 NVM Sets: Not Supported 00:19:03.009 Read Recovery Levels: Not Supported 00:19:03.009 Endurance Groups: Not Supported 00:19:03.009 Predictable Latency Mode: Not Supported 00:19:03.009 Traffic Based Keep ALive: Not Supported 00:19:03.009 Namespace Granularity: Not Supported 00:19:03.009 SQ Associations: Not Supported 00:19:03.009 UUID List: Not Supported 00:19:03.009 Multi-Domain Subsystem: Not Supported 00:19:03.009 Fixed Capacity Management: Not Supported 00:19:03.009 Variable Capacity Management: Not Supported 00:19:03.009 Delete Endurance Group: Not Supported 00:19:03.009 Delete NVM Set: Not Supported 00:19:03.009 Extended LBA Formats Supported: Supported 00:19:03.009 Flexible Data Placement Supported: Not Supported 00:19:03.009 00:19:03.009 Controller Memory Buffer Support 00:19:03.009 ================================ 00:19:03.009 Supported: No 00:19:03.009 00:19:03.009 Persistent Memory Region Support 00:19:03.009 ================================ 00:19:03.009 Supported: No 00:19:03.009 00:19:03.009 Admin Command Set Attributes 00:19:03.009 ============================ 00:19:03.009 Security Send/Receive: Not Supported 00:19:03.009 Format NVM: Supported 00:19:03.009 Firmware Activate/Download: Not Supported 00:19:03.009 Namespace Management: Supported 00:19:03.009 Device Self-Test: Not Supported 00:19:03.009 Directives: Supported 00:19:03.009 NVMe-MI: Not Supported 00:19:03.009 Virtualization Management: Not Supported 00:19:03.009 Doorbell Buffer Config: Supported 00:19:03.009 Get LBA Status Capability: Not Supported 00:19:03.009 Command & Feature Lockdown Capability: Not Supported 00:19:03.009 Abort Command Limit: 4 00:19:03.009 Async Event Request Limit: 4 00:19:03.009 Number of Firmware Slots: N/A 00:19:03.009 Firmware Slot 1 Read-Only: N/A 00:19:03.009 Firmware Activation Without Reset: N/A 00:19:03.009 Multiple Update Detection Support: N/A 00:19:03.009 Firmware Update Granularity: No Information Provided 00:19:03.009 Per-Namespace SMART Log: 
Yes 00:19:03.009 Asymmetric Namespace Access Log Page: Not Supported 00:19:03.009 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:19:03.009 Command Effects Log Page: Supported 00:19:03.009 Get Log Page Extended Data: Supported 00:19:03.009 Telemetry Log Pages: Not Supported 00:19:03.009 Persistent Event Log Pages: Not Supported 00:19:03.009 Supported Log Pages Log Page: May Support 00:19:03.009 Commands Supported & Effects Log Page: Not Supported 00:19:03.009 Feature Identifiers & Effects Log Page:May Support 00:19:03.009 NVMe-MI Commands & Effects Log Page: May Support 00:19:03.009 Data Area 4 for Telemetry Log: Not Supported 00:19:03.009 Error Log Page Entries Supported: 1 00:19:03.009 Keep Alive: Not Supported 00:19:03.009 00:19:03.009 NVM Command Set Attributes 00:19:03.009 ========================== 00:19:03.009 Submission Queue Entry Size 00:19:03.009 Max: 64 00:19:03.009 Min: 64 00:19:03.009 Completion Queue Entry Size 00:19:03.009 Max: 16 00:19:03.009 Min: 16 00:19:03.009 Number of Namespaces: 256 00:19:03.009 Compare Command: Supported 00:19:03.009 Write Uncorrectable Command: Not Supported 00:19:03.009 Dataset Management Command: Supported 00:19:03.009 Write Zeroes Command: Supported 00:19:03.009 Set Features Save Field: Supported 00:19:03.009 Reservations: Not Supported 00:19:03.009 Timestamp: Supported 00:19:03.009 Copy: Supported 00:19:03.009 Volatile Write Cache: Present 00:19:03.009 Atomic Write Unit (Normal): 1 00:19:03.009 Atomic Write Unit (PFail): 1 00:19:03.009 Atomic Compare & Write Unit: 1 00:19:03.009 Fused Compare & Write: Not Supported 00:19:03.009 Scatter-Gather List 00:19:03.009 SGL Command Set: Supported 00:19:03.009 SGL Keyed: Not Supported 00:19:03.009 SGL Bit Bucket Descriptor: Not Supported 00:19:03.009 SGL Metadata Pointer: Not Supported 00:19:03.009 Oversized SGL: Not Supported 00:19:03.009 SGL Metadata Address: Not Supported 00:19:03.009 SGL Offset: Not Supported 00:19:03.009 Transport SGL Data Block: Not Supported 00:19:03.009 Replay Protected Memory Block: Not Supported 00:19:03.009 00:19:03.009 Firmware Slot Information 00:19:03.009 ========================= 00:19:03.009 Active slot: 1 00:19:03.009 Slot 1 Firmware Revision: 1.0 00:19:03.009 00:19:03.009 00:19:03.009 Commands Supported and Effects 00:19:03.009 ============================== 00:19:03.009 Admin Commands 00:19:03.009 -------------- 00:19:03.009 Delete I/O Submission Queue (00h): Supported 00:19:03.009 Create I/O Submission Queue (01h): Supported 00:19:03.009 Get Log Page (02h): Supported 00:19:03.009 Delete I/O Completion Queue (04h): Supported 00:19:03.009 Create I/O Completion Queue (05h): Supported 00:19:03.009 Identify (06h): Supported 00:19:03.009 Abort (08h): Supported 00:19:03.009 Set Features (09h): Supported 00:19:03.009 Get Features (0Ah): Supported 00:19:03.009 Asynchronous Event Request (0Ch): Supported 00:19:03.009 Namespace Attachment (15h): Supported NS-Inventory-Change 00:19:03.009 Directive Send (19h): Supported 00:19:03.009 Directive Receive (1Ah): Supported 00:19:03.009 Virtualization Management (1Ch): Supported 00:19:03.009 Doorbell Buffer Config (7Ch): Supported 00:19:03.009 Format NVM (80h): Supported LBA-Change 00:19:03.009 I/O Commands 00:19:03.009 ------------ 00:19:03.009 Flush (00h): Supported LBA-Change 00:19:03.009 Write (01h): Supported LBA-Change 00:19:03.009 Read (02h): Supported 00:19:03.009 Compare (05h): Supported 00:19:03.009 Write Zeroes (08h): Supported LBA-Change 00:19:03.009 Dataset Management (09h): Supported LBA-Change 00:19:03.009 Unknown (0Ch): 
Supported 00:19:03.009 Unknown (12h): Supported 00:19:03.010 Copy (19h): Supported LBA-Change 00:19:03.010 Unknown (1Dh): Supported LBA-Change 00:19:03.010 00:19:03.010 Error Log 00:19:03.010 ========= 00:19:03.010 00:19:03.010 Arbitration 00:19:03.010 =========== 00:19:03.010 Arbitration Burst: no limit 00:19:03.010 00:19:03.010 Power Management 00:19:03.010 ================ 00:19:03.010 Number of Power States: 1 00:19:03.010 Current Power State: Power State #0 00:19:03.010 Power State #0: 00:19:03.010 Max Power: 25.00 W 00:19:03.010 Non-Operational State: Operational 00:19:03.010 Entry Latency: 16 microseconds 00:19:03.010 Exit Latency: 4 microseconds 00:19:03.010 Relative Read Throughput: 0 00:19:03.010 Relative Read Latency: 0 00:19:03.010 Relative Write Throughput: 0 00:19:03.010 Relative Write Latency: 0 00:19:03.010 Idle Power: Not Reported 00:19:03.010 Active Power: Not Reported 00:19:03.010 Non-Operational Permissive Mode: Not Supported 00:19:03.010 00:19:03.010 Health Information 00:19:03.010 ================== 00:19:03.010 Critical Warnings: 00:19:03.010 Available Spare Space: OK 00:19:03.010 Temperature: OK 00:19:03.010 Device Reliability: OK 00:19:03.010 Read Only: No 00:19:03.010 Volatile Memory Backup: OK 00:19:03.010 Current Temperature: 323 Kelvin (50 Celsius) 00:19:03.010 Temperature Threshold: 343 Kelvin (70 Celsius) 00:19:03.010 Available Spare: 0% 00:19:03.010 Available Spare Threshold: 0% 00:19:03.010 Life Percentage Used: 0% 00:19:03.010 Data Units Read: 2462 00:19:03.010 Data Units Written: 2142 00:19:03.010 Host Read Commands: 115217 00:19:03.010 Host Write Commands: 110987 00:19:03.010 Controller Busy Time: 0 minutes 00:19:03.010 Power Cycles: 0 00:19:03.010 Power On Hours: 0 hours 00:19:03.010 Unsafe Shutdowns: 0 00:19:03.010 Unrecoverable Media Errors: 0 00:19:03.010 Lifetime Error Log Entries: 0 00:19:03.010 Warning Temperature Time: 0 minutes 00:19:03.010 Critical Temperature Time: 0 minutes 00:19:03.010 00:19:03.010 Number of Queues 00:19:03.010 ================ 00:19:03.010 Number of I/O Submission Queues: 64 00:19:03.010 Number of I/O Completion Queues: 64 00:19:03.010 00:19:03.010 ZNS Specific Controller Data 00:19:03.010 ============================ 00:19:03.010 Zone Append Size Limit: 0 00:19:03.010 00:19:03.010 00:19:03.010 Active Namespaces 00:19:03.010 ================= 00:19:03.010 Namespace ID:1 00:19:03.010 Error Recovery Timeout: Unlimited 00:19:03.010 Command Set Identifier: NVM (00h) 00:19:03.010 Deallocate: Supported 00:19:03.010 Deallocated/Unwritten Error: Supported 00:19:03.010 Deallocated Read Value: All 0x00 00:19:03.010 Deallocate in Write Zeroes: Not Supported 00:19:03.010 Deallocated Guard Field: 0xFFFF 00:19:03.010 Flush: Supported 00:19:03.010 Reservation: Not Supported 00:19:03.010 Namespace Sharing Capabilities: Private 00:19:03.010 Size (in LBAs): 1048576 (4GiB) 00:19:03.010 Capacity (in LBAs): 1048576 (4GiB) 00:19:03.010 Utilization (in LBAs): 1048576 (4GiB) 00:19:03.010 Thin Provisioning: Not Supported 00:19:03.010 Per-NS Atomic Units: No 00:19:03.010 Maximum Single Source Range Length: 128 00:19:03.010 Maximum Copy Length: 128 00:19:03.010 Maximum Source Range Count: 128 00:19:03.010 NGUID/EUI64 Never Reused: No 00:19:03.010 Namespace Write Protected: No 00:19:03.010 Number of LBA Formats: 8 00:19:03.010 Current LBA Format: LBA Format #04 00:19:03.010 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:03.010 LBA Format #01: Data Size: 512 Metadata Size: 8 00:19:03.010 LBA Format #02: Data Size: 512 Metadata Size: 16 
00:19:03.010 LBA Format #03: Data Size: 512 Metadata Size: 64 00:19:03.010 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:19:03.010 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:19:03.010 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:19:03.010 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:19:03.010 00:19:03.010 NVM Specific Namespace Data 00:19:03.010 =========================== 00:19:03.010 Logical Block Storage Tag Mask: 0 00:19:03.010 Protection Information Capabilities: 00:19:03.010 16b Guard Protection Information Storage Tag Support: No 00:19:03.010 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:19:03.010 Storage Tag Check Read Support: No 00:19:03.010 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.010 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.010 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.010 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.010 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.010 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.010 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.010 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.010 Namespace ID:2 00:19:03.010 Error Recovery Timeout: Unlimited 00:19:03.010 Command Set Identifier: NVM (00h) 00:19:03.010 Deallocate: Supported 00:19:03.010 Deallocated/Unwritten Error: Supported 00:19:03.010 Deallocated Read Value: All 0x00 00:19:03.010 Deallocate in Write Zeroes: Not Supported 00:19:03.010 Deallocated Guard Field: 0xFFFF 00:19:03.010 Flush: Supported 00:19:03.010 Reservation: Not Supported 00:19:03.010 Namespace Sharing Capabilities: Private 00:19:03.010 Size (in LBAs): 1048576 (4GiB) 00:19:03.010 Capacity (in LBAs): 1048576 (4GiB) 00:19:03.010 Utilization (in LBAs): 1048576 (4GiB) 00:19:03.010 Thin Provisioning: Not Supported 00:19:03.010 Per-NS Atomic Units: No 00:19:03.010 Maximum Single Source Range Length: 128 00:19:03.010 Maximum Copy Length: 128 00:19:03.010 Maximum Source Range Count: 128 00:19:03.010 NGUID/EUI64 Never Reused: No 00:19:03.010 Namespace Write Protected: No 00:19:03.010 Number of LBA Formats: 8 00:19:03.010 Current LBA Format: LBA Format #04 00:19:03.010 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:03.010 LBA Format #01: Data Size: 512 Metadata Size: 8 00:19:03.010 LBA Format #02: Data Size: 512 Metadata Size: 16 00:19:03.010 LBA Format #03: Data Size: 512 Metadata Size: 64 00:19:03.010 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:19:03.010 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:19:03.010 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:19:03.010 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:19:03.010 00:19:03.010 NVM Specific Namespace Data 00:19:03.010 =========================== 00:19:03.010 Logical Block Storage Tag Mask: 0 00:19:03.010 Protection Information Capabilities: 00:19:03.010 16b Guard Protection Information Storage Tag Support: No 00:19:03.010 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:19:03.010 Storage Tag Check Read Support: No 00:19:03.010 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 
00:19:03.010 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.010 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.010 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.010 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.010 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.010 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.010 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.010 Namespace ID:3 00:19:03.010 Error Recovery Timeout: Unlimited 00:19:03.010 Command Set Identifier: NVM (00h) 00:19:03.010 Deallocate: Supported 00:19:03.010 Deallocated/Unwritten Error: Supported 00:19:03.010 Deallocated Read Value: All 0x00 00:19:03.010 Deallocate in Write Zeroes: Not Supported 00:19:03.010 Deallocated Guard Field: 0xFFFF 00:19:03.010 Flush: Supported 00:19:03.010 Reservation: Not Supported 00:19:03.010 Namespace Sharing Capabilities: Private 00:19:03.010 Size (in LBAs): 1048576 (4GiB) 00:19:03.271 Capacity (in LBAs): 1048576 (4GiB) 00:19:03.271 Utilization (in LBAs): 1048576 (4GiB) 00:19:03.271 Thin Provisioning: Not Supported 00:19:03.271 Per-NS Atomic Units: No 00:19:03.271 Maximum Single Source Range Length: 128 00:19:03.271 Maximum Copy Length: 128 00:19:03.271 Maximum Source Range Count: 128 00:19:03.271 NGUID/EUI64 Never Reused: No 00:19:03.271 Namespace Write Protected: No 00:19:03.271 Number of LBA Formats: 8 00:19:03.271 Current LBA Format: LBA Format #04 00:19:03.271 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:03.271 LBA Format #01: Data Size: 512 Metadata Size: 8 00:19:03.271 LBA Format #02: Data Size: 512 Metadata Size: 16 00:19:03.271 LBA Format #03: Data Size: 512 Metadata Size: 64 00:19:03.271 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:19:03.271 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:19:03.271 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:19:03.271 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:19:03.271 00:19:03.271 NVM Specific Namespace Data 00:19:03.271 =========================== 00:19:03.271 Logical Block Storage Tag Mask: 0 00:19:03.271 Protection Information Capabilities: 00:19:03.271 16b Guard Protection Information Storage Tag Support: No 00:19:03.271 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:19:03.271 Storage Tag Check Read Support: No 00:19:03.271 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.271 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.271 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.271 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.271 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.271 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.271 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.271 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.271 12:21:12 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in 
"${bdfs[@]}" 00:19:03.271 12:21:12 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:19:03.271 ===================================================== 00:19:03.271 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:19:03.271 ===================================================== 00:19:03.271 Controller Capabilities/Features 00:19:03.271 ================================ 00:19:03.271 Vendor ID: 1b36 00:19:03.271 Subsystem Vendor ID: 1af4 00:19:03.271 Serial Number: 12340 00:19:03.271 Model Number: QEMU NVMe Ctrl 00:19:03.271 Firmware Version: 8.0.0 00:19:03.271 Recommended Arb Burst: 6 00:19:03.271 IEEE OUI Identifier: 00 54 52 00:19:03.271 Multi-path I/O 00:19:03.271 May have multiple subsystem ports: No 00:19:03.271 May have multiple controllers: No 00:19:03.271 Associated with SR-IOV VF: No 00:19:03.271 Max Data Transfer Size: 524288 00:19:03.271 Max Number of Namespaces: 256 00:19:03.271 Max Number of I/O Queues: 64 00:19:03.271 NVMe Specification Version (VS): 1.4 00:19:03.271 NVMe Specification Version (Identify): 1.4 00:19:03.271 Maximum Queue Entries: 2048 00:19:03.271 Contiguous Queues Required: Yes 00:19:03.271 Arbitration Mechanisms Supported 00:19:03.271 Weighted Round Robin: Not Supported 00:19:03.271 Vendor Specific: Not Supported 00:19:03.271 Reset Timeout: 7500 ms 00:19:03.271 Doorbell Stride: 4 bytes 00:19:03.271 NVM Subsystem Reset: Not Supported 00:19:03.271 Command Sets Supported 00:19:03.271 NVM Command Set: Supported 00:19:03.271 Boot Partition: Not Supported 00:19:03.271 Memory Page Size Minimum: 4096 bytes 00:19:03.271 Memory Page Size Maximum: 65536 bytes 00:19:03.271 Persistent Memory Region: Not Supported 00:19:03.271 Optional Asynchronous Events Supported 00:19:03.271 Namespace Attribute Notices: Supported 00:19:03.271 Firmware Activation Notices: Not Supported 00:19:03.271 ANA Change Notices: Not Supported 00:19:03.271 PLE Aggregate Log Change Notices: Not Supported 00:19:03.271 LBA Status Info Alert Notices: Not Supported 00:19:03.271 EGE Aggregate Log Change Notices: Not Supported 00:19:03.271 Normal NVM Subsystem Shutdown event: Not Supported 00:19:03.271 Zone Descriptor Change Notices: Not Supported 00:19:03.271 Discovery Log Change Notices: Not Supported 00:19:03.271 Controller Attributes 00:19:03.271 128-bit Host Identifier: Not Supported 00:19:03.271 Non-Operational Permissive Mode: Not Supported 00:19:03.271 NVM Sets: Not Supported 00:19:03.271 Read Recovery Levels: Not Supported 00:19:03.271 Endurance Groups: Not Supported 00:19:03.271 Predictable Latency Mode: Not Supported 00:19:03.271 Traffic Based Keep ALive: Not Supported 00:19:03.271 Namespace Granularity: Not Supported 00:19:03.271 SQ Associations: Not Supported 00:19:03.271 UUID List: Not Supported 00:19:03.271 Multi-Domain Subsystem: Not Supported 00:19:03.271 Fixed Capacity Management: Not Supported 00:19:03.271 Variable Capacity Management: Not Supported 00:19:03.271 Delete Endurance Group: Not Supported 00:19:03.271 Delete NVM Set: Not Supported 00:19:03.271 Extended LBA Formats Supported: Supported 00:19:03.271 Flexible Data Placement Supported: Not Supported 00:19:03.271 00:19:03.271 Controller Memory Buffer Support 00:19:03.271 ================================ 00:19:03.271 Supported: No 00:19:03.271 00:19:03.271 Persistent Memory Region Support 00:19:03.271 ================================ 00:19:03.271 Supported: No 00:19:03.271 00:19:03.271 Admin Command Set Attributes 00:19:03.271 
============================ 00:19:03.271 Security Send/Receive: Not Supported 00:19:03.271 Format NVM: Supported 00:19:03.271 Firmware Activate/Download: Not Supported 00:19:03.271 Namespace Management: Supported 00:19:03.271 Device Self-Test: Not Supported 00:19:03.271 Directives: Supported 00:19:03.271 NVMe-MI: Not Supported 00:19:03.271 Virtualization Management: Not Supported 00:19:03.271 Doorbell Buffer Config: Supported 00:19:03.271 Get LBA Status Capability: Not Supported 00:19:03.271 Command & Feature Lockdown Capability: Not Supported 00:19:03.271 Abort Command Limit: 4 00:19:03.271 Async Event Request Limit: 4 00:19:03.271 Number of Firmware Slots: N/A 00:19:03.271 Firmware Slot 1 Read-Only: N/A 00:19:03.271 Firmware Activation Without Reset: N/A 00:19:03.271 Multiple Update Detection Support: N/A 00:19:03.271 Firmware Update Granularity: No Information Provided 00:19:03.271 Per-Namespace SMART Log: Yes 00:19:03.271 Asymmetric Namespace Access Log Page: Not Supported 00:19:03.271 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:19:03.271 Command Effects Log Page: Supported 00:19:03.271 Get Log Page Extended Data: Supported 00:19:03.271 Telemetry Log Pages: Not Supported 00:19:03.271 Persistent Event Log Pages: Not Supported 00:19:03.271 Supported Log Pages Log Page: May Support 00:19:03.271 Commands Supported & Effects Log Page: Not Supported 00:19:03.271 Feature Identifiers & Effects Log Page:May Support 00:19:03.271 NVMe-MI Commands & Effects Log Page: May Support 00:19:03.271 Data Area 4 for Telemetry Log: Not Supported 00:19:03.271 Error Log Page Entries Supported: 1 00:19:03.271 Keep Alive: Not Supported 00:19:03.271 00:19:03.271 NVM Command Set Attributes 00:19:03.271 ========================== 00:19:03.271 Submission Queue Entry Size 00:19:03.271 Max: 64 00:19:03.271 Min: 64 00:19:03.271 Completion Queue Entry Size 00:19:03.271 Max: 16 00:19:03.271 Min: 16 00:19:03.271 Number of Namespaces: 256 00:19:03.271 Compare Command: Supported 00:19:03.271 Write Uncorrectable Command: Not Supported 00:19:03.271 Dataset Management Command: Supported 00:19:03.271 Write Zeroes Command: Supported 00:19:03.271 Set Features Save Field: Supported 00:19:03.271 Reservations: Not Supported 00:19:03.271 Timestamp: Supported 00:19:03.271 Copy: Supported 00:19:03.271 Volatile Write Cache: Present 00:19:03.271 Atomic Write Unit (Normal): 1 00:19:03.271 Atomic Write Unit (PFail): 1 00:19:03.271 Atomic Compare & Write Unit: 1 00:19:03.271 Fused Compare & Write: Not Supported 00:19:03.271 Scatter-Gather List 00:19:03.271 SGL Command Set: Supported 00:19:03.271 SGL Keyed: Not Supported 00:19:03.271 SGL Bit Bucket Descriptor: Not Supported 00:19:03.271 SGL Metadata Pointer: Not Supported 00:19:03.271 Oversized SGL: Not Supported 00:19:03.271 SGL Metadata Address: Not Supported 00:19:03.271 SGL Offset: Not Supported 00:19:03.271 Transport SGL Data Block: Not Supported 00:19:03.271 Replay Protected Memory Block: Not Supported 00:19:03.271 00:19:03.271 Firmware Slot Information 00:19:03.271 ========================= 00:19:03.271 Active slot: 1 00:19:03.271 Slot 1 Firmware Revision: 1.0 00:19:03.271 00:19:03.271 00:19:03.271 Commands Supported and Effects 00:19:03.271 ============================== 00:19:03.271 Admin Commands 00:19:03.271 -------------- 00:19:03.271 Delete I/O Submission Queue (00h): Supported 00:19:03.271 Create I/O Submission Queue (01h): Supported 00:19:03.271 Get Log Page (02h): Supported 00:19:03.271 Delete I/O Completion Queue (04h): Supported 00:19:03.272 Create I/O Completion Queue 
(05h): Supported 00:19:03.272 Identify (06h): Supported 00:19:03.272 Abort (08h): Supported 00:19:03.272 Set Features (09h): Supported 00:19:03.272 Get Features (0Ah): Supported 00:19:03.272 Asynchronous Event Request (0Ch): Supported 00:19:03.272 Namespace Attachment (15h): Supported NS-Inventory-Change 00:19:03.272 Directive Send (19h): Supported 00:19:03.272 Directive Receive (1Ah): Supported 00:19:03.272 Virtualization Management (1Ch): Supported 00:19:03.272 Doorbell Buffer Config (7Ch): Supported 00:19:03.272 Format NVM (80h): Supported LBA-Change 00:19:03.272 I/O Commands 00:19:03.272 ------------ 00:19:03.272 Flush (00h): Supported LBA-Change 00:19:03.272 Write (01h): Supported LBA-Change 00:19:03.272 Read (02h): Supported 00:19:03.272 Compare (05h): Supported 00:19:03.272 Write Zeroes (08h): Supported LBA-Change 00:19:03.272 Dataset Management (09h): Supported LBA-Change 00:19:03.272 Unknown (0Ch): Supported 00:19:03.272 Unknown (12h): Supported 00:19:03.272 Copy (19h): Supported LBA-Change 00:19:03.272 Unknown (1Dh): Supported LBA-Change 00:19:03.272 00:19:03.272 Error Log 00:19:03.272 ========= 00:19:03.272 00:19:03.272 Arbitration 00:19:03.272 =========== 00:19:03.272 Arbitration Burst: no limit 00:19:03.272 00:19:03.272 Power Management 00:19:03.272 ================ 00:19:03.272 Number of Power States: 1 00:19:03.272 Current Power State: Power State #0 00:19:03.272 Power State #0: 00:19:03.272 Max Power: 25.00 W 00:19:03.272 Non-Operational State: Operational 00:19:03.272 Entry Latency: 16 microseconds 00:19:03.272 Exit Latency: 4 microseconds 00:19:03.272 Relative Read Throughput: 0 00:19:03.272 Relative Read Latency: 0 00:19:03.272 Relative Write Throughput: 0 00:19:03.272 Relative Write Latency: 0 00:19:03.532 Idle Power: Not Reported 00:19:03.532 Active Power: Not Reported 00:19:03.532 Non-Operational Permissive Mode: Not Supported 00:19:03.532 00:19:03.532 Health Information 00:19:03.532 ================== 00:19:03.532 Critical Warnings: 00:19:03.532 Available Spare Space: OK 00:19:03.532 Temperature: OK 00:19:03.532 Device Reliability: OK 00:19:03.532 Read Only: No 00:19:03.532 Volatile Memory Backup: OK 00:19:03.532 Current Temperature: 323 Kelvin (50 Celsius) 00:19:03.532 Temperature Threshold: 343 Kelvin (70 Celsius) 00:19:03.532 Available Spare: 0% 00:19:03.532 Available Spare Threshold: 0% 00:19:03.532 Life Percentage Used: 0% 00:19:03.532 Data Units Read: 1156 00:19:03.532 Data Units Written: 983 00:19:03.532 Host Read Commands: 54308 00:19:03.532 Host Write Commands: 52748 00:19:03.532 Controller Busy Time: 0 minutes 00:19:03.532 Power Cycles: 0 00:19:03.532 Power On Hours: 0 hours 00:19:03.532 Unsafe Shutdowns: 0 00:19:03.532 Unrecoverable Media Errors: 0 00:19:03.532 Lifetime Error Log Entries: 0 00:19:03.532 Warning Temperature Time: 0 minutes 00:19:03.532 Critical Temperature Time: 0 minutes 00:19:03.532 00:19:03.532 Number of Queues 00:19:03.532 ================ 00:19:03.532 Number of I/O Submission Queues: 64 00:19:03.532 Number of I/O Completion Queues: 64 00:19:03.532 00:19:03.532 ZNS Specific Controller Data 00:19:03.532 ============================ 00:19:03.532 Zone Append Size Limit: 0 00:19:03.532 00:19:03.532 00:19:03.532 Active Namespaces 00:19:03.532 ================= 00:19:03.532 Namespace ID:1 00:19:03.532 Error Recovery Timeout: Unlimited 00:19:03.532 Command Set Identifier: NVM (00h) 00:19:03.532 Deallocate: Supported 00:19:03.532 Deallocated/Unwritten Error: Supported 00:19:03.532 Deallocated Read Value: All 0x00 00:19:03.532 Deallocate in 
Write Zeroes: Not Supported 00:19:03.532 Deallocated Guard Field: 0xFFFF 00:19:03.532 Flush: Supported 00:19:03.532 Reservation: Not Supported 00:19:03.532 Metadata Transferred as: Separate Metadata Buffer 00:19:03.532 Namespace Sharing Capabilities: Private 00:19:03.532 Size (in LBAs): 1548666 (5GiB) 00:19:03.532 Capacity (in LBAs): 1548666 (5GiB) 00:19:03.532 Utilization (in LBAs): 1548666 (5GiB) 00:19:03.532 Thin Provisioning: Not Supported 00:19:03.532 Per-NS Atomic Units: No 00:19:03.532 Maximum Single Source Range Length: 128 00:19:03.532 Maximum Copy Length: 128 00:19:03.532 Maximum Source Range Count: 128 00:19:03.532 NGUID/EUI64 Never Reused: No 00:19:03.532 Namespace Write Protected: No 00:19:03.532 Number of LBA Formats: 8 00:19:03.532 Current LBA Format: LBA Format #07 00:19:03.532 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:03.532 LBA Format #01: Data Size: 512 Metadata Size: 8 00:19:03.532 LBA Format #02: Data Size: 512 Metadata Size: 16 00:19:03.532 LBA Format #03: Data Size: 512 Metadata Size: 64 00:19:03.532 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:19:03.532 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:19:03.532 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:19:03.532 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:19:03.532 00:19:03.532 NVM Specific Namespace Data 00:19:03.532 =========================== 00:19:03.532 Logical Block Storage Tag Mask: 0 00:19:03.532 Protection Information Capabilities: 00:19:03.532 16b Guard Protection Information Storage Tag Support: No 00:19:03.532 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:19:03.532 Storage Tag Check Read Support: No 00:19:03.532 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.532 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.532 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.532 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.532 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.532 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.532 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.532 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.532 12:21:12 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:19:03.532 12:21:12 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:19:03.790 ===================================================== 00:19:03.790 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:19:03.790 ===================================================== 00:19:03.790 Controller Capabilities/Features 00:19:03.790 ================================ 00:19:03.790 Vendor ID: 1b36 00:19:03.790 Subsystem Vendor ID: 1af4 00:19:03.790 Serial Number: 12341 00:19:03.790 Model Number: QEMU NVMe Ctrl 00:19:03.790 Firmware Version: 8.0.0 00:19:03.790 Recommended Arb Burst: 6 00:19:03.790 IEEE OUI Identifier: 00 54 52 00:19:03.790 Multi-path I/O 00:19:03.790 May have multiple subsystem ports: No 00:19:03.790 May have multiple controllers: No 00:19:03.790 Associated with SR-IOV VF: No 00:19:03.790 Max Data Transfer Size: 524288 00:19:03.790 Max 
Number of Namespaces: 256 00:19:03.790 Max Number of I/O Queues: 64 00:19:03.790 NVMe Specification Version (VS): 1.4 00:19:03.790 NVMe Specification Version (Identify): 1.4 00:19:03.790 Maximum Queue Entries: 2048 00:19:03.790 Contiguous Queues Required: Yes 00:19:03.790 Arbitration Mechanisms Supported 00:19:03.790 Weighted Round Robin: Not Supported 00:19:03.790 Vendor Specific: Not Supported 00:19:03.790 Reset Timeout: 7500 ms 00:19:03.790 Doorbell Stride: 4 bytes 00:19:03.790 NVM Subsystem Reset: Not Supported 00:19:03.790 Command Sets Supported 00:19:03.790 NVM Command Set: Supported 00:19:03.790 Boot Partition: Not Supported 00:19:03.790 Memory Page Size Minimum: 4096 bytes 00:19:03.790 Memory Page Size Maximum: 65536 bytes 00:19:03.790 Persistent Memory Region: Not Supported 00:19:03.790 Optional Asynchronous Events Supported 00:19:03.790 Namespace Attribute Notices: Supported 00:19:03.790 Firmware Activation Notices: Not Supported 00:19:03.790 ANA Change Notices: Not Supported 00:19:03.790 PLE Aggregate Log Change Notices: Not Supported 00:19:03.790 LBA Status Info Alert Notices: Not Supported 00:19:03.790 EGE Aggregate Log Change Notices: Not Supported 00:19:03.790 Normal NVM Subsystem Shutdown event: Not Supported 00:19:03.790 Zone Descriptor Change Notices: Not Supported 00:19:03.790 Discovery Log Change Notices: Not Supported 00:19:03.790 Controller Attributes 00:19:03.790 128-bit Host Identifier: Not Supported 00:19:03.790 Non-Operational Permissive Mode: Not Supported 00:19:03.790 NVM Sets: Not Supported 00:19:03.790 Read Recovery Levels: Not Supported 00:19:03.790 Endurance Groups: Not Supported 00:19:03.790 Predictable Latency Mode: Not Supported 00:19:03.790 Traffic Based Keep ALive: Not Supported 00:19:03.790 Namespace Granularity: Not Supported 00:19:03.790 SQ Associations: Not Supported 00:19:03.790 UUID List: Not Supported 00:19:03.790 Multi-Domain Subsystem: Not Supported 00:19:03.790 Fixed Capacity Management: Not Supported 00:19:03.790 Variable Capacity Management: Not Supported 00:19:03.790 Delete Endurance Group: Not Supported 00:19:03.790 Delete NVM Set: Not Supported 00:19:03.790 Extended LBA Formats Supported: Supported 00:19:03.790 Flexible Data Placement Supported: Not Supported 00:19:03.790 00:19:03.790 Controller Memory Buffer Support 00:19:03.790 ================================ 00:19:03.790 Supported: No 00:19:03.790 00:19:03.790 Persistent Memory Region Support 00:19:03.790 ================================ 00:19:03.790 Supported: No 00:19:03.790 00:19:03.790 Admin Command Set Attributes 00:19:03.790 ============================ 00:19:03.790 Security Send/Receive: Not Supported 00:19:03.790 Format NVM: Supported 00:19:03.790 Firmware Activate/Download: Not Supported 00:19:03.790 Namespace Management: Supported 00:19:03.790 Device Self-Test: Not Supported 00:19:03.790 Directives: Supported 00:19:03.790 NVMe-MI: Not Supported 00:19:03.790 Virtualization Management: Not Supported 00:19:03.790 Doorbell Buffer Config: Supported 00:19:03.790 Get LBA Status Capability: Not Supported 00:19:03.790 Command & Feature Lockdown Capability: Not Supported 00:19:03.790 Abort Command Limit: 4 00:19:03.790 Async Event Request Limit: 4 00:19:03.790 Number of Firmware Slots: N/A 00:19:03.790 Firmware Slot 1 Read-Only: N/A 00:19:03.790 Firmware Activation Without Reset: N/A 00:19:03.790 Multiple Update Detection Support: N/A 00:19:03.790 Firmware Update Granularity: No Information Provided 00:19:03.790 Per-Namespace SMART Log: Yes 00:19:03.790 Asymmetric Namespace Access Log 
Page: Not Supported 00:19:03.790 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:19:03.790 Command Effects Log Page: Supported 00:19:03.790 Get Log Page Extended Data: Supported 00:19:03.790 Telemetry Log Pages: Not Supported 00:19:03.790 Persistent Event Log Pages: Not Supported 00:19:03.790 Supported Log Pages Log Page: May Support 00:19:03.790 Commands Supported & Effects Log Page: Not Supported 00:19:03.790 Feature Identifiers & Effects Log Page:May Support 00:19:03.790 NVMe-MI Commands & Effects Log Page: May Support 00:19:03.790 Data Area 4 for Telemetry Log: Not Supported 00:19:03.790 Error Log Page Entries Supported: 1 00:19:03.790 Keep Alive: Not Supported 00:19:03.790 00:19:03.790 NVM Command Set Attributes 00:19:03.790 ========================== 00:19:03.790 Submission Queue Entry Size 00:19:03.790 Max: 64 00:19:03.790 Min: 64 00:19:03.790 Completion Queue Entry Size 00:19:03.790 Max: 16 00:19:03.790 Min: 16 00:19:03.790 Number of Namespaces: 256 00:19:03.790 Compare Command: Supported 00:19:03.790 Write Uncorrectable Command: Not Supported 00:19:03.790 Dataset Management Command: Supported 00:19:03.790 Write Zeroes Command: Supported 00:19:03.790 Set Features Save Field: Supported 00:19:03.790 Reservations: Not Supported 00:19:03.790 Timestamp: Supported 00:19:03.790 Copy: Supported 00:19:03.790 Volatile Write Cache: Present 00:19:03.790 Atomic Write Unit (Normal): 1 00:19:03.790 Atomic Write Unit (PFail): 1 00:19:03.790 Atomic Compare & Write Unit: 1 00:19:03.790 Fused Compare & Write: Not Supported 00:19:03.790 Scatter-Gather List 00:19:03.790 SGL Command Set: Supported 00:19:03.790 SGL Keyed: Not Supported 00:19:03.790 SGL Bit Bucket Descriptor: Not Supported 00:19:03.790 SGL Metadata Pointer: Not Supported 00:19:03.790 Oversized SGL: Not Supported 00:19:03.790 SGL Metadata Address: Not Supported 00:19:03.790 SGL Offset: Not Supported 00:19:03.790 Transport SGL Data Block: Not Supported 00:19:03.790 Replay Protected Memory Block: Not Supported 00:19:03.790 00:19:03.790 Firmware Slot Information 00:19:03.790 ========================= 00:19:03.790 Active slot: 1 00:19:03.790 Slot 1 Firmware Revision: 1.0 00:19:03.790 00:19:03.790 00:19:03.790 Commands Supported and Effects 00:19:03.790 ============================== 00:19:03.790 Admin Commands 00:19:03.790 -------------- 00:19:03.790 Delete I/O Submission Queue (00h): Supported 00:19:03.790 Create I/O Submission Queue (01h): Supported 00:19:03.790 Get Log Page (02h): Supported 00:19:03.790 Delete I/O Completion Queue (04h): Supported 00:19:03.790 Create I/O Completion Queue (05h): Supported 00:19:03.790 Identify (06h): Supported 00:19:03.790 Abort (08h): Supported 00:19:03.790 Set Features (09h): Supported 00:19:03.790 Get Features (0Ah): Supported 00:19:03.790 Asynchronous Event Request (0Ch): Supported 00:19:03.790 Namespace Attachment (15h): Supported NS-Inventory-Change 00:19:03.790 Directive Send (19h): Supported 00:19:03.790 Directive Receive (1Ah): Supported 00:19:03.790 Virtualization Management (1Ch): Supported 00:19:03.791 Doorbell Buffer Config (7Ch): Supported 00:19:03.791 Format NVM (80h): Supported LBA-Change 00:19:03.791 I/O Commands 00:19:03.791 ------------ 00:19:03.791 Flush (00h): Supported LBA-Change 00:19:03.791 Write (01h): Supported LBA-Change 00:19:03.791 Read (02h): Supported 00:19:03.791 Compare (05h): Supported 00:19:03.791 Write Zeroes (08h): Supported LBA-Change 00:19:03.791 Dataset Management (09h): Supported LBA-Change 00:19:03.791 Unknown (0Ch): Supported 00:19:03.791 Unknown (12h): Supported 
00:19:03.791 Copy (19h): Supported LBA-Change 00:19:03.791 Unknown (1Dh): Supported LBA-Change 00:19:03.791 00:19:03.791 Error Log 00:19:03.791 ========= 00:19:03.791 00:19:03.791 Arbitration 00:19:03.791 =========== 00:19:03.791 Arbitration Burst: no limit 00:19:03.791 00:19:03.791 Power Management 00:19:03.791 ================ 00:19:03.791 Number of Power States: 1 00:19:03.791 Current Power State: Power State #0 00:19:03.791 Power State #0: 00:19:03.791 Max Power: 25.00 W 00:19:03.791 Non-Operational State: Operational 00:19:03.791 Entry Latency: 16 microseconds 00:19:03.791 Exit Latency: 4 microseconds 00:19:03.791 Relative Read Throughput: 0 00:19:03.791 Relative Read Latency: 0 00:19:03.791 Relative Write Throughput: 0 00:19:03.791 Relative Write Latency: 0 00:19:03.791 Idle Power: Not Reported 00:19:03.791 Active Power: Not Reported 00:19:03.791 Non-Operational Permissive Mode: Not Supported 00:19:03.791 00:19:03.791 Health Information 00:19:03.791 ================== 00:19:03.791 Critical Warnings: 00:19:03.791 Available Spare Space: OK 00:19:03.791 Temperature: OK 00:19:03.791 Device Reliability: OK 00:19:03.791 Read Only: No 00:19:03.791 Volatile Memory Backup: OK 00:19:03.791 Current Temperature: 323 Kelvin (50 Celsius) 00:19:03.791 Temperature Threshold: 343 Kelvin (70 Celsius) 00:19:03.791 Available Spare: 0% 00:19:03.791 Available Spare Threshold: 0% 00:19:03.791 Life Percentage Used: 0% 00:19:03.791 Data Units Read: 859 00:19:03.791 Data Units Written: 701 00:19:03.791 Host Read Commands: 39261 00:19:03.791 Host Write Commands: 36847 00:19:03.791 Controller Busy Time: 0 minutes 00:19:03.791 Power Cycles: 0 00:19:03.791 Power On Hours: 0 hours 00:19:03.791 Unsafe Shutdowns: 0 00:19:03.791 Unrecoverable Media Errors: 0 00:19:03.791 Lifetime Error Log Entries: 0 00:19:03.791 Warning Temperature Time: 0 minutes 00:19:03.791 Critical Temperature Time: 0 minutes 00:19:03.791 00:19:03.791 Number of Queues 00:19:03.791 ================ 00:19:03.791 Number of I/O Submission Queues: 64 00:19:03.791 Number of I/O Completion Queues: 64 00:19:03.791 00:19:03.791 ZNS Specific Controller Data 00:19:03.791 ============================ 00:19:03.791 Zone Append Size Limit: 0 00:19:03.791 00:19:03.791 00:19:03.791 Active Namespaces 00:19:03.791 ================= 00:19:03.791 Namespace ID:1 00:19:03.791 Error Recovery Timeout: Unlimited 00:19:03.791 Command Set Identifier: NVM (00h) 00:19:03.791 Deallocate: Supported 00:19:03.791 Deallocated/Unwritten Error: Supported 00:19:03.791 Deallocated Read Value: All 0x00 00:19:03.791 Deallocate in Write Zeroes: Not Supported 00:19:03.791 Deallocated Guard Field: 0xFFFF 00:19:03.791 Flush: Supported 00:19:03.791 Reservation: Not Supported 00:19:03.791 Namespace Sharing Capabilities: Private 00:19:03.791 Size (in LBAs): 1310720 (5GiB) 00:19:03.791 Capacity (in LBAs): 1310720 (5GiB) 00:19:03.791 Utilization (in LBAs): 1310720 (5GiB) 00:19:03.791 Thin Provisioning: Not Supported 00:19:03.791 Per-NS Atomic Units: No 00:19:03.791 Maximum Single Source Range Length: 128 00:19:03.791 Maximum Copy Length: 128 00:19:03.791 Maximum Source Range Count: 128 00:19:03.791 NGUID/EUI64 Never Reused: No 00:19:03.791 Namespace Write Protected: No 00:19:03.791 Number of LBA Formats: 8 00:19:03.791 Current LBA Format: LBA Format #04 00:19:03.791 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:03.791 LBA Format #01: Data Size: 512 Metadata Size: 8 00:19:03.791 LBA Format #02: Data Size: 512 Metadata Size: 16 00:19:03.791 LBA Format #03: Data Size: 512 Metadata Size: 64 
00:19:03.791 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:19:03.791 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:19:03.791 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:19:03.791 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:19:03.791 00:19:03.791 NVM Specific Namespace Data 00:19:03.791 =========================== 00:19:03.791 Logical Block Storage Tag Mask: 0 00:19:03.791 Protection Information Capabilities: 00:19:03.791 16b Guard Protection Information Storage Tag Support: No 00:19:03.791 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:19:03.791 Storage Tag Check Read Support: No 00:19:03.791 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.791 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.791 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.791 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.791 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.791 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.791 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.791 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:03.791 12:21:13 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:19:03.791 12:21:13 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:19:04.050 ===================================================== 00:19:04.050 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:19:04.050 ===================================================== 00:19:04.050 Controller Capabilities/Features 00:19:04.050 ================================ 00:19:04.050 Vendor ID: 1b36 00:19:04.050 Subsystem Vendor ID: 1af4 00:19:04.050 Serial Number: 12342 00:19:04.050 Model Number: QEMU NVMe Ctrl 00:19:04.050 Firmware Version: 8.0.0 00:19:04.050 Recommended Arb Burst: 6 00:19:04.050 IEEE OUI Identifier: 00 54 52 00:19:04.050 Multi-path I/O 00:19:04.050 May have multiple subsystem ports: No 00:19:04.050 May have multiple controllers: No 00:19:04.050 Associated with SR-IOV VF: No 00:19:04.050 Max Data Transfer Size: 524288 00:19:04.050 Max Number of Namespaces: 256 00:19:04.050 Max Number of I/O Queues: 64 00:19:04.050 NVMe Specification Version (VS): 1.4 00:19:04.050 NVMe Specification Version (Identify): 1.4 00:19:04.050 Maximum Queue Entries: 2048 00:19:04.050 Contiguous Queues Required: Yes 00:19:04.050 Arbitration Mechanisms Supported 00:19:04.050 Weighted Round Robin: Not Supported 00:19:04.050 Vendor Specific: Not Supported 00:19:04.050 Reset Timeout: 7500 ms 00:19:04.050 Doorbell Stride: 4 bytes 00:19:04.050 NVM Subsystem Reset: Not Supported 00:19:04.050 Command Sets Supported 00:19:04.050 NVM Command Set: Supported 00:19:04.050 Boot Partition: Not Supported 00:19:04.050 Memory Page Size Minimum: 4096 bytes 00:19:04.050 Memory Page Size Maximum: 65536 bytes 00:19:04.050 Persistent Memory Region: Not Supported 00:19:04.050 Optional Asynchronous Events Supported 00:19:04.050 Namespace Attribute Notices: Supported 00:19:04.050 Firmware Activation Notices: Not Supported 00:19:04.050 ANA Change Notices: Not Supported 00:19:04.050 PLE Aggregate Log Change 
Notices: Not Supported 00:19:04.050 LBA Status Info Alert Notices: Not Supported 00:19:04.050 EGE Aggregate Log Change Notices: Not Supported 00:19:04.050 Normal NVM Subsystem Shutdown event: Not Supported 00:19:04.050 Zone Descriptor Change Notices: Not Supported 00:19:04.050 Discovery Log Change Notices: Not Supported 00:19:04.050 Controller Attributes 00:19:04.050 128-bit Host Identifier: Not Supported 00:19:04.050 Non-Operational Permissive Mode: Not Supported 00:19:04.050 NVM Sets: Not Supported 00:19:04.050 Read Recovery Levels: Not Supported 00:19:04.050 Endurance Groups: Not Supported 00:19:04.050 Predictable Latency Mode: Not Supported 00:19:04.050 Traffic Based Keep ALive: Not Supported 00:19:04.050 Namespace Granularity: Not Supported 00:19:04.051 SQ Associations: Not Supported 00:19:04.051 UUID List: Not Supported 00:19:04.051 Multi-Domain Subsystem: Not Supported 00:19:04.051 Fixed Capacity Management: Not Supported 00:19:04.051 Variable Capacity Management: Not Supported 00:19:04.051 Delete Endurance Group: Not Supported 00:19:04.051 Delete NVM Set: Not Supported 00:19:04.051 Extended LBA Formats Supported: Supported 00:19:04.051 Flexible Data Placement Supported: Not Supported 00:19:04.051 00:19:04.051 Controller Memory Buffer Support 00:19:04.051 ================================ 00:19:04.051 Supported: No 00:19:04.051 00:19:04.051 Persistent Memory Region Support 00:19:04.051 ================================ 00:19:04.051 Supported: No 00:19:04.051 00:19:04.051 Admin Command Set Attributes 00:19:04.051 ============================ 00:19:04.051 Security Send/Receive: Not Supported 00:19:04.051 Format NVM: Supported 00:19:04.051 Firmware Activate/Download: Not Supported 00:19:04.051 Namespace Management: Supported 00:19:04.051 Device Self-Test: Not Supported 00:19:04.051 Directives: Supported 00:19:04.051 NVMe-MI: Not Supported 00:19:04.051 Virtualization Management: Not Supported 00:19:04.051 Doorbell Buffer Config: Supported 00:19:04.051 Get LBA Status Capability: Not Supported 00:19:04.051 Command & Feature Lockdown Capability: Not Supported 00:19:04.051 Abort Command Limit: 4 00:19:04.051 Async Event Request Limit: 4 00:19:04.051 Number of Firmware Slots: N/A 00:19:04.051 Firmware Slot 1 Read-Only: N/A 00:19:04.051 Firmware Activation Without Reset: N/A 00:19:04.051 Multiple Update Detection Support: N/A 00:19:04.051 Firmware Update Granularity: No Information Provided 00:19:04.051 Per-Namespace SMART Log: Yes 00:19:04.051 Asymmetric Namespace Access Log Page: Not Supported 00:19:04.051 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:19:04.051 Command Effects Log Page: Supported 00:19:04.051 Get Log Page Extended Data: Supported 00:19:04.051 Telemetry Log Pages: Not Supported 00:19:04.051 Persistent Event Log Pages: Not Supported 00:19:04.051 Supported Log Pages Log Page: May Support 00:19:04.051 Commands Supported & Effects Log Page: Not Supported 00:19:04.051 Feature Identifiers & Effects Log Page:May Support 00:19:04.051 NVMe-MI Commands & Effects Log Page: May Support 00:19:04.051 Data Area 4 for Telemetry Log: Not Supported 00:19:04.051 Error Log Page Entries Supported: 1 00:19:04.051 Keep Alive: Not Supported 00:19:04.051 00:19:04.051 NVM Command Set Attributes 00:19:04.051 ========================== 00:19:04.051 Submission Queue Entry Size 00:19:04.051 Max: 64 00:19:04.051 Min: 64 00:19:04.051 Completion Queue Entry Size 00:19:04.051 Max: 16 00:19:04.051 Min: 16 00:19:04.051 Number of Namespaces: 256 00:19:04.051 Compare Command: Supported 00:19:04.051 Write 
Uncorrectable Command: Not Supported 00:19:04.051 Dataset Management Command: Supported 00:19:04.051 Write Zeroes Command: Supported 00:19:04.051 Set Features Save Field: Supported 00:19:04.051 Reservations: Not Supported 00:19:04.051 Timestamp: Supported 00:19:04.051 Copy: Supported 00:19:04.051 Volatile Write Cache: Present 00:19:04.051 Atomic Write Unit (Normal): 1 00:19:04.051 Atomic Write Unit (PFail): 1 00:19:04.051 Atomic Compare & Write Unit: 1 00:19:04.051 Fused Compare & Write: Not Supported 00:19:04.051 Scatter-Gather List 00:19:04.051 SGL Command Set: Supported 00:19:04.051 SGL Keyed: Not Supported 00:19:04.051 SGL Bit Bucket Descriptor: Not Supported 00:19:04.051 SGL Metadata Pointer: Not Supported 00:19:04.051 Oversized SGL: Not Supported 00:19:04.051 SGL Metadata Address: Not Supported 00:19:04.051 SGL Offset: Not Supported 00:19:04.051 Transport SGL Data Block: Not Supported 00:19:04.051 Replay Protected Memory Block: Not Supported 00:19:04.051 00:19:04.051 Firmware Slot Information 00:19:04.051 ========================= 00:19:04.051 Active slot: 1 00:19:04.051 Slot 1 Firmware Revision: 1.0 00:19:04.051 00:19:04.051 00:19:04.051 Commands Supported and Effects 00:19:04.051 ============================== 00:19:04.051 Admin Commands 00:19:04.051 -------------- 00:19:04.051 Delete I/O Submission Queue (00h): Supported 00:19:04.051 Create I/O Submission Queue (01h): Supported 00:19:04.051 Get Log Page (02h): Supported 00:19:04.051 Delete I/O Completion Queue (04h): Supported 00:19:04.051 Create I/O Completion Queue (05h): Supported 00:19:04.051 Identify (06h): Supported 00:19:04.051 Abort (08h): Supported 00:19:04.051 Set Features (09h): Supported 00:19:04.051 Get Features (0Ah): Supported 00:19:04.051 Asynchronous Event Request (0Ch): Supported 00:19:04.051 Namespace Attachment (15h): Supported NS-Inventory-Change 00:19:04.051 Directive Send (19h): Supported 00:19:04.051 Directive Receive (1Ah): Supported 00:19:04.051 Virtualization Management (1Ch): Supported 00:19:04.051 Doorbell Buffer Config (7Ch): Supported 00:19:04.051 Format NVM (80h): Supported LBA-Change 00:19:04.051 I/O Commands 00:19:04.051 ------------ 00:19:04.051 Flush (00h): Supported LBA-Change 00:19:04.051 Write (01h): Supported LBA-Change 00:19:04.051 Read (02h): Supported 00:19:04.051 Compare (05h): Supported 00:19:04.051 Write Zeroes (08h): Supported LBA-Change 00:19:04.051 Dataset Management (09h): Supported LBA-Change 00:19:04.051 Unknown (0Ch): Supported 00:19:04.051 Unknown (12h): Supported 00:19:04.051 Copy (19h): Supported LBA-Change 00:19:04.051 Unknown (1Dh): Supported LBA-Change 00:19:04.051 00:19:04.051 Error Log 00:19:04.051 ========= 00:19:04.051 00:19:04.051 Arbitration 00:19:04.051 =========== 00:19:04.051 Arbitration Burst: no limit 00:19:04.051 00:19:04.051 Power Management 00:19:04.051 ================ 00:19:04.051 Number of Power States: 1 00:19:04.051 Current Power State: Power State #0 00:19:04.051 Power State #0: 00:19:04.051 Max Power: 25.00 W 00:19:04.051 Non-Operational State: Operational 00:19:04.051 Entry Latency: 16 microseconds 00:19:04.051 Exit Latency: 4 microseconds 00:19:04.051 Relative Read Throughput: 0 00:19:04.051 Relative Read Latency: 0 00:19:04.051 Relative Write Throughput: 0 00:19:04.051 Relative Write Latency: 0 00:19:04.051 Idle Power: Not Reported 00:19:04.051 Active Power: Not Reported 00:19:04.051 Non-Operational Permissive Mode: Not Supported 00:19:04.051 00:19:04.051 Health Information 00:19:04.051 ================== 00:19:04.051 Critical Warnings: 00:19:04.051 
Available Spare Space: OK 00:19:04.051 Temperature: OK 00:19:04.051 Device Reliability: OK 00:19:04.051 Read Only: No 00:19:04.051 Volatile Memory Backup: OK 00:19:04.051 Current Temperature: 323 Kelvin (50 Celsius) 00:19:04.051 Temperature Threshold: 343 Kelvin (70 Celsius) 00:19:04.051 Available Spare: 0% 00:19:04.051 Available Spare Threshold: 0% 00:19:04.051 Life Percentage Used: 0% 00:19:04.051 Data Units Read: 2462 00:19:04.051 Data Units Written: 2142 00:19:04.051 Host Read Commands: 115217 00:19:04.051 Host Write Commands: 110987 00:19:04.051 Controller Busy Time: 0 minutes 00:19:04.051 Power Cycles: 0 00:19:04.051 Power On Hours: 0 hours 00:19:04.051 Unsafe Shutdowns: 0 00:19:04.051 Unrecoverable Media Errors: 0 00:19:04.051 Lifetime Error Log Entries: 0 00:19:04.051 Warning Temperature Time: 0 minutes 00:19:04.051 Critical Temperature Time: 0 minutes 00:19:04.051 00:19:04.051 Number of Queues 00:19:04.051 ================ 00:19:04.051 Number of I/O Submission Queues: 64 00:19:04.051 Number of I/O Completion Queues: 64 00:19:04.051 00:19:04.051 ZNS Specific Controller Data 00:19:04.051 ============================ 00:19:04.051 Zone Append Size Limit: 0 00:19:04.051 00:19:04.051 00:19:04.051 Active Namespaces 00:19:04.051 ================= 00:19:04.051 Namespace ID:1 00:19:04.051 Error Recovery Timeout: Unlimited 00:19:04.051 Command Set Identifier: NVM (00h) 00:19:04.051 Deallocate: Supported 00:19:04.051 Deallocated/Unwritten Error: Supported 00:19:04.051 Deallocated Read Value: All 0x00 00:19:04.051 Deallocate in Write Zeroes: Not Supported 00:19:04.051 Deallocated Guard Field: 0xFFFF 00:19:04.051 Flush: Supported 00:19:04.051 Reservation: Not Supported 00:19:04.051 Namespace Sharing Capabilities: Private 00:19:04.051 Size (in LBAs): 1048576 (4GiB) 00:19:04.051 Capacity (in LBAs): 1048576 (4GiB) 00:19:04.051 Utilization (in LBAs): 1048576 (4GiB) 00:19:04.051 Thin Provisioning: Not Supported 00:19:04.051 Per-NS Atomic Units: No 00:19:04.051 Maximum Single Source Range Length: 128 00:19:04.051 Maximum Copy Length: 128 00:19:04.051 Maximum Source Range Count: 128 00:19:04.051 NGUID/EUI64 Never Reused: No 00:19:04.051 Namespace Write Protected: No 00:19:04.051 Number of LBA Formats: 8 00:19:04.051 Current LBA Format: LBA Format #04 00:19:04.051 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:04.051 LBA Format #01: Data Size: 512 Metadata Size: 8 00:19:04.051 LBA Format #02: Data Size: 512 Metadata Size: 16 00:19:04.051 LBA Format #03: Data Size: 512 Metadata Size: 64 00:19:04.051 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:19:04.051 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:19:04.051 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:19:04.051 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:19:04.051 00:19:04.051 NVM Specific Namespace Data 00:19:04.051 =========================== 00:19:04.051 Logical Block Storage Tag Mask: 0 00:19:04.051 Protection Information Capabilities: 00:19:04.051 16b Guard Protection Information Storage Tag Support: No 00:19:04.051 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:19:04.051 Storage Tag Check Read Support: No 00:19:04.051 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:04.051 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:04.051 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:04.051 Extended LBA Format #03: Storage Tag Size: 0 , 
Protection Information Format: 16b Guard PI 00:19:04.051 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:04.051 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:04.051 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:04.051 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:04.051 Namespace ID:2 00:19:04.051 Error Recovery Timeout: Unlimited 00:19:04.051 Command Set Identifier: NVM (00h) 00:19:04.051 Deallocate: Supported 00:19:04.051 Deallocated/Unwritten Error: Supported 00:19:04.051 Deallocated Read Value: All 0x00 00:19:04.051 Deallocate in Write Zeroes: Not Supported 00:19:04.051 Deallocated Guard Field: 0xFFFF 00:19:04.051 Flush: Supported 00:19:04.051 Reservation: Not Supported 00:19:04.051 Namespace Sharing Capabilities: Private 00:19:04.051 Size (in LBAs): 1048576 (4GiB) 00:19:04.051 Capacity (in LBAs): 1048576 (4GiB) 00:19:04.051 Utilization (in LBAs): 1048576 (4GiB) 00:19:04.051 Thin Provisioning: Not Supported 00:19:04.051 Per-NS Atomic Units: No 00:19:04.051 Maximum Single Source Range Length: 128 00:19:04.051 Maximum Copy Length: 128 00:19:04.051 Maximum Source Range Count: 128 00:19:04.051 NGUID/EUI64 Never Reused: No 00:19:04.051 Namespace Write Protected: No 00:19:04.051 Number of LBA Formats: 8 00:19:04.051 Current LBA Format: LBA Format #04 00:19:04.051 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:04.051 LBA Format #01: Data Size: 512 Metadata Size: 8 00:19:04.051 LBA Format #02: Data Size: 512 Metadata Size: 16 00:19:04.051 LBA Format #03: Data Size: 512 Metadata Size: 64 00:19:04.051 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:19:04.051 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:19:04.051 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:19:04.051 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:19:04.051 00:19:04.051 NVM Specific Namespace Data 00:19:04.051 =========================== 00:19:04.051 Logical Block Storage Tag Mask: 0 00:19:04.051 Protection Information Capabilities: 00:19:04.051 16b Guard Protection Information Storage Tag Support: No 00:19:04.051 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:19:04.051 Storage Tag Check Read Support: No 00:19:04.051 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:04.051 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:04.051 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:04.051 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:04.051 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:04.051 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:04.051 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:04.051 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:04.051 Namespace ID:3 00:19:04.051 Error Recovery Timeout: Unlimited 00:19:04.051 Command Set Identifier: NVM (00h) 00:19:04.051 Deallocate: Supported 00:19:04.051 Deallocated/Unwritten Error: Supported 00:19:04.051 Deallocated Read Value: All 0x00 00:19:04.051 Deallocate in Write Zeroes: Not Supported 00:19:04.051 Deallocated Guard Field: 0xFFFF 
00:19:04.051 Flush: Supported 00:19:04.051 Reservation: Not Supported 00:19:04.051 Namespace Sharing Capabilities: Private 00:19:04.051 Size (in LBAs): 1048576 (4GiB) 00:19:04.051 Capacity (in LBAs): 1048576 (4GiB) 00:19:04.051 Utilization (in LBAs): 1048576 (4GiB) 00:19:04.051 Thin Provisioning: Not Supported 00:19:04.051 Per-NS Atomic Units: No 00:19:04.051 Maximum Single Source Range Length: 128 00:19:04.051 Maximum Copy Length: 128 00:19:04.051 Maximum Source Range Count: 128 00:19:04.051 NGUID/EUI64 Never Reused: No 00:19:04.051 Namespace Write Protected: No 00:19:04.051 Number of LBA Formats: 8 00:19:04.051 Current LBA Format: LBA Format #04 00:19:04.051 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:04.051 LBA Format #01: Data Size: 512 Metadata Size: 8 00:19:04.051 LBA Format #02: Data Size: 512 Metadata Size: 16 00:19:04.051 LBA Format #03: Data Size: 512 Metadata Size: 64 00:19:04.051 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:19:04.051 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:19:04.051 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:19:04.051 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:19:04.051 00:19:04.051 NVM Specific Namespace Data 00:19:04.051 =========================== 00:19:04.051 Logical Block Storage Tag Mask: 0 00:19:04.051 Protection Information Capabilities: 00:19:04.051 16b Guard Protection Information Storage Tag Support: No 00:19:04.051 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:19:04.051 Storage Tag Check Read Support: No 00:19:04.051 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:04.051 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:04.051 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:04.051 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:04.051 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:04.051 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:04.051 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:04.051 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:04.051 12:21:13 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:19:04.052 12:21:13 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:19:04.311 ===================================================== 00:19:04.311 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:19:04.311 ===================================================== 00:19:04.311 Controller Capabilities/Features 00:19:04.311 ================================ 00:19:04.311 Vendor ID: 1b36 00:19:04.311 Subsystem Vendor ID: 1af4 00:19:04.311 Serial Number: 12343 00:19:04.311 Model Number: QEMU NVMe Ctrl 00:19:04.311 Firmware Version: 8.0.0 00:19:04.311 Recommended Arb Burst: 6 00:19:04.311 IEEE OUI Identifier: 00 54 52 00:19:04.311 Multi-path I/O 00:19:04.311 May have multiple subsystem ports: No 00:19:04.311 May have multiple controllers: Yes 00:19:04.311 Associated with SR-IOV VF: No 00:19:04.311 Max Data Transfer Size: 524288 00:19:04.311 Max Number of Namespaces: 256 00:19:04.311 Max Number of I/O Queues: 64 00:19:04.311 NVMe Specification Version (VS): 1.4 00:19:04.311 NVMe 
Specification Version (Identify): 1.4 00:19:04.311 Maximum Queue Entries: 2048 00:19:04.311 Contiguous Queues Required: Yes 00:19:04.311 Arbitration Mechanisms Supported 00:19:04.311 Weighted Round Robin: Not Supported 00:19:04.311 Vendor Specific: Not Supported 00:19:04.311 Reset Timeout: 7500 ms 00:19:04.311 Doorbell Stride: 4 bytes 00:19:04.311 NVM Subsystem Reset: Not Supported 00:19:04.311 Command Sets Supported 00:19:04.311 NVM Command Set: Supported 00:19:04.311 Boot Partition: Not Supported 00:19:04.311 Memory Page Size Minimum: 4096 bytes 00:19:04.311 Memory Page Size Maximum: 65536 bytes 00:19:04.311 Persistent Memory Region: Not Supported 00:19:04.311 Optional Asynchronous Events Supported 00:19:04.311 Namespace Attribute Notices: Supported 00:19:04.311 Firmware Activation Notices: Not Supported 00:19:04.311 ANA Change Notices: Not Supported 00:19:04.311 PLE Aggregate Log Change Notices: Not Supported 00:19:04.311 LBA Status Info Alert Notices: Not Supported 00:19:04.311 EGE Aggregate Log Change Notices: Not Supported 00:19:04.311 Normal NVM Subsystem Shutdown event: Not Supported 00:19:04.311 Zone Descriptor Change Notices: Not Supported 00:19:04.311 Discovery Log Change Notices: Not Supported 00:19:04.311 Controller Attributes 00:19:04.311 128-bit Host Identifier: Not Supported 00:19:04.311 Non-Operational Permissive Mode: Not Supported 00:19:04.311 NVM Sets: Not Supported 00:19:04.311 Read Recovery Levels: Not Supported 00:19:04.311 Endurance Groups: Supported 00:19:04.311 Predictable Latency Mode: Not Supported 00:19:04.311 Traffic Based Keep ALive: Not Supported 00:19:04.311 Namespace Granularity: Not Supported 00:19:04.311 SQ Associations: Not Supported 00:19:04.311 UUID List: Not Supported 00:19:04.311 Multi-Domain Subsystem: Not Supported 00:19:04.311 Fixed Capacity Management: Not Supported 00:19:04.311 Variable Capacity Management: Not Supported 00:19:04.311 Delete Endurance Group: Not Supported 00:19:04.311 Delete NVM Set: Not Supported 00:19:04.311 Extended LBA Formats Supported: Supported 00:19:04.311 Flexible Data Placement Supported: Supported 00:19:04.311 00:19:04.311 Controller Memory Buffer Support 00:19:04.311 ================================ 00:19:04.311 Supported: No 00:19:04.311 00:19:04.311 Persistent Memory Region Support 00:19:04.311 ================================ 00:19:04.311 Supported: No 00:19:04.311 00:19:04.311 Admin Command Set Attributes 00:19:04.311 ============================ 00:19:04.311 Security Send/Receive: Not Supported 00:19:04.311 Format NVM: Supported 00:19:04.311 Firmware Activate/Download: Not Supported 00:19:04.311 Namespace Management: Supported 00:19:04.311 Device Self-Test: Not Supported 00:19:04.311 Directives: Supported 00:19:04.311 NVMe-MI: Not Supported 00:19:04.311 Virtualization Management: Not Supported 00:19:04.311 Doorbell Buffer Config: Supported 00:19:04.311 Get LBA Status Capability: Not Supported 00:19:04.311 Command & Feature Lockdown Capability: Not Supported 00:19:04.311 Abort Command Limit: 4 00:19:04.311 Async Event Request Limit: 4 00:19:04.311 Number of Firmware Slots: N/A 00:19:04.311 Firmware Slot 1 Read-Only: N/A 00:19:04.311 Firmware Activation Without Reset: N/A 00:19:04.311 Multiple Update Detection Support: N/A 00:19:04.311 Firmware Update Granularity: No Information Provided 00:19:04.311 Per-Namespace SMART Log: Yes 00:19:04.311 Asymmetric Namespace Access Log Page: Not Supported 00:19:04.311 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:19:04.311 Command Effects Log Page: Supported 00:19:04.311 
Get Log Page Extended Data: Supported 00:19:04.311 Telemetry Log Pages: Not Supported 00:19:04.311 Persistent Event Log Pages: Not Supported 00:19:04.311 Supported Log Pages Log Page: May Support 00:19:04.311 Commands Supported & Effects Log Page: Not Supported 00:19:04.311 Feature Identifiers & Effects Log Page:May Support 00:19:04.311 NVMe-MI Commands & Effects Log Page: May Support 00:19:04.311 Data Area 4 for Telemetry Log: Not Supported 00:19:04.311 Error Log Page Entries Supported: 1 00:19:04.311 Keep Alive: Not Supported 00:19:04.311 00:19:04.311 NVM Command Set Attributes 00:19:04.311 ========================== 00:19:04.311 Submission Queue Entry Size 00:19:04.311 Max: 64 00:19:04.311 Min: 64 00:19:04.311 Completion Queue Entry Size 00:19:04.311 Max: 16 00:19:04.311 Min: 16 00:19:04.311 Number of Namespaces: 256 00:19:04.311 Compare Command: Supported 00:19:04.311 Write Uncorrectable Command: Not Supported 00:19:04.311 Dataset Management Command: Supported 00:19:04.311 Write Zeroes Command: Supported 00:19:04.311 Set Features Save Field: Supported 00:19:04.311 Reservations: Not Supported 00:19:04.311 Timestamp: Supported 00:19:04.311 Copy: Supported 00:19:04.311 Volatile Write Cache: Present 00:19:04.311 Atomic Write Unit (Normal): 1 00:19:04.311 Atomic Write Unit (PFail): 1 00:19:04.311 Atomic Compare & Write Unit: 1 00:19:04.311 Fused Compare & Write: Not Supported 00:19:04.311 Scatter-Gather List 00:19:04.311 SGL Command Set: Supported 00:19:04.311 SGL Keyed: Not Supported 00:19:04.311 SGL Bit Bucket Descriptor: Not Supported 00:19:04.311 SGL Metadata Pointer: Not Supported 00:19:04.311 Oversized SGL: Not Supported 00:19:04.311 SGL Metadata Address: Not Supported 00:19:04.311 SGL Offset: Not Supported 00:19:04.311 Transport SGL Data Block: Not Supported 00:19:04.311 Replay Protected Memory Block: Not Supported 00:19:04.311 00:19:04.311 Firmware Slot Information 00:19:04.311 ========================= 00:19:04.311 Active slot: 1 00:19:04.311 Slot 1 Firmware Revision: 1.0 00:19:04.311 00:19:04.311 00:19:04.311 Commands Supported and Effects 00:19:04.311 ============================== 00:19:04.311 Admin Commands 00:19:04.311 -------------- 00:19:04.311 Delete I/O Submission Queue (00h): Supported 00:19:04.311 Create I/O Submission Queue (01h): Supported 00:19:04.311 Get Log Page (02h): Supported 00:19:04.311 Delete I/O Completion Queue (04h): Supported 00:19:04.311 Create I/O Completion Queue (05h): Supported 00:19:04.311 Identify (06h): Supported 00:19:04.311 Abort (08h): Supported 00:19:04.311 Set Features (09h): Supported 00:19:04.311 Get Features (0Ah): Supported 00:19:04.311 Asynchronous Event Request (0Ch): Supported 00:19:04.311 Namespace Attachment (15h): Supported NS-Inventory-Change 00:19:04.311 Directive Send (19h): Supported 00:19:04.311 Directive Receive (1Ah): Supported 00:19:04.311 Virtualization Management (1Ch): Supported 00:19:04.311 Doorbell Buffer Config (7Ch): Supported 00:19:04.311 Format NVM (80h): Supported LBA-Change 00:19:04.311 I/O Commands 00:19:04.311 ------------ 00:19:04.311 Flush (00h): Supported LBA-Change 00:19:04.311 Write (01h): Supported LBA-Change 00:19:04.311 Read (02h): Supported 00:19:04.311 Compare (05h): Supported 00:19:04.311 Write Zeroes (08h): Supported LBA-Change 00:19:04.311 Dataset Management (09h): Supported LBA-Change 00:19:04.311 Unknown (0Ch): Supported 00:19:04.311 Unknown (12h): Supported 00:19:04.311 Copy (19h): Supported LBA-Change 00:19:04.311 Unknown (1Dh): Supported LBA-Change 00:19:04.311 00:19:04.311 Error Log 
00:19:04.311 ========= 00:19:04.311 00:19:04.311 Arbitration 00:19:04.311 =========== 00:19:04.311 Arbitration Burst: no limit 00:19:04.311 00:19:04.311 Power Management 00:19:04.311 ================ 00:19:04.311 Number of Power States: 1 00:19:04.311 Current Power State: Power State #0 00:19:04.311 Power State #0: 00:19:04.311 Max Power: 25.00 W 00:19:04.311 Non-Operational State: Operational 00:19:04.311 Entry Latency: 16 microseconds 00:19:04.311 Exit Latency: 4 microseconds 00:19:04.311 Relative Read Throughput: 0 00:19:04.311 Relative Read Latency: 0 00:19:04.311 Relative Write Throughput: 0 00:19:04.311 Relative Write Latency: 0 00:19:04.311 Idle Power: Not Reported 00:19:04.311 Active Power: Not Reported 00:19:04.311 Non-Operational Permissive Mode: Not Supported 00:19:04.311 00:19:04.311 Health Information 00:19:04.311 ================== 00:19:04.311 Critical Warnings: 00:19:04.311 Available Spare Space: OK 00:19:04.311 Temperature: OK 00:19:04.311 Device Reliability: OK 00:19:04.311 Read Only: No 00:19:04.311 Volatile Memory Backup: OK 00:19:04.311 Current Temperature: 323 Kelvin (50 Celsius) 00:19:04.312 Temperature Threshold: 343 Kelvin (70 Celsius) 00:19:04.312 Available Spare: 0% 00:19:04.312 Available Spare Threshold: 0% 00:19:04.312 Life Percentage Used: 0% 00:19:04.312 Data Units Read: 873 00:19:04.312 Data Units Written: 767 00:19:04.312 Host Read Commands: 38895 00:19:04.312 Host Write Commands: 37485 00:19:04.312 Controller Busy Time: 0 minutes 00:19:04.312 Power Cycles: 0 00:19:04.312 Power On Hours: 0 hours 00:19:04.312 Unsafe Shutdowns: 0 00:19:04.312 Unrecoverable Media Errors: 0 00:19:04.312 Lifetime Error Log Entries: 0 00:19:04.312 Warning Temperature Time: 0 minutes 00:19:04.312 Critical Temperature Time: 0 minutes 00:19:04.312 00:19:04.312 Number of Queues 00:19:04.312 ================ 00:19:04.312 Number of I/O Submission Queues: 64 00:19:04.312 Number of I/O Completion Queues: 64 00:19:04.312 00:19:04.312 ZNS Specific Controller Data 00:19:04.312 ============================ 00:19:04.312 Zone Append Size Limit: 0 00:19:04.312 00:19:04.312 00:19:04.312 Active Namespaces 00:19:04.312 ================= 00:19:04.312 Namespace ID:1 00:19:04.312 Error Recovery Timeout: Unlimited 00:19:04.312 Command Set Identifier: NVM (00h) 00:19:04.312 Deallocate: Supported 00:19:04.312 Deallocated/Unwritten Error: Supported 00:19:04.312 Deallocated Read Value: All 0x00 00:19:04.312 Deallocate in Write Zeroes: Not Supported 00:19:04.312 Deallocated Guard Field: 0xFFFF 00:19:04.312 Flush: Supported 00:19:04.312 Reservation: Not Supported 00:19:04.312 Namespace Sharing Capabilities: Multiple Controllers 00:19:04.312 Size (in LBAs): 262144 (1GiB) 00:19:04.312 Capacity (in LBAs): 262144 (1GiB) 00:19:04.312 Utilization (in LBAs): 262144 (1GiB) 00:19:04.312 Thin Provisioning: Not Supported 00:19:04.312 Per-NS Atomic Units: No 00:19:04.312 Maximum Single Source Range Length: 128 00:19:04.312 Maximum Copy Length: 128 00:19:04.312 Maximum Source Range Count: 128 00:19:04.312 NGUID/EUI64 Never Reused: No 00:19:04.312 Namespace Write Protected: No 00:19:04.312 Endurance group ID: 1 00:19:04.312 Number of LBA Formats: 8 00:19:04.312 Current LBA Format: LBA Format #04 00:19:04.312 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:04.312 LBA Format #01: Data Size: 512 Metadata Size: 8 00:19:04.312 LBA Format #02: Data Size: 512 Metadata Size: 16 00:19:04.312 LBA Format #03: Data Size: 512 Metadata Size: 64 00:19:04.312 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:19:04.312 LBA Format 
#05: Data Size: 4096 Metadata Size: 8 00:19:04.312 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:19:04.312 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:19:04.312 00:19:04.312 Get Feature FDP: 00:19:04.312 ================ 00:19:04.312 Enabled: Yes 00:19:04.312 FDP configuration index: 0 00:19:04.312 00:19:04.312 FDP configurations log page 00:19:04.312 =========================== 00:19:04.312 Number of FDP configurations: 1 00:19:04.312 Version: 0 00:19:04.312 Size: 112 00:19:04.312 FDP Configuration Descriptor: 0 00:19:04.312 Descriptor Size: 96 00:19:04.312 Reclaim Group Identifier format: 2 00:19:04.312 FDP Volatile Write Cache: Not Present 00:19:04.312 FDP Configuration: Valid 00:19:04.312 Vendor Specific Size: 0 00:19:04.312 Number of Reclaim Groups: 2 00:19:04.312 Number of Recalim Unit Handles: 8 00:19:04.312 Max Placement Identifiers: 128 00:19:04.312 Number of Namespaces Suppprted: 256 00:19:04.312 Reclaim unit Nominal Size: 6000000 bytes 00:19:04.312 Estimated Reclaim Unit Time Limit: Not Reported 00:19:04.312 RUH Desc #000: RUH Type: Initially Isolated 00:19:04.312 RUH Desc #001: RUH Type: Initially Isolated 00:19:04.312 RUH Desc #002: RUH Type: Initially Isolated 00:19:04.312 RUH Desc #003: RUH Type: Initially Isolated 00:19:04.312 RUH Desc #004: RUH Type: Initially Isolated 00:19:04.312 RUH Desc #005: RUH Type: Initially Isolated 00:19:04.312 RUH Desc #006: RUH Type: Initially Isolated 00:19:04.312 RUH Desc #007: RUH Type: Initially Isolated 00:19:04.312 00:19:04.312 FDP reclaim unit handle usage log page 00:19:04.312 ====================================== 00:19:04.312 Number of Reclaim Unit Handles: 8 00:19:04.312 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:19:04.312 RUH Usage Desc #001: RUH Attributes: Unused 00:19:04.312 RUH Usage Desc #002: RUH Attributes: Unused 00:19:04.312 RUH Usage Desc #003: RUH Attributes: Unused 00:19:04.312 RUH Usage Desc #004: RUH Attributes: Unused 00:19:04.312 RUH Usage Desc #005: RUH Attributes: Unused 00:19:04.312 RUH Usage Desc #006: RUH Attributes: Unused 00:19:04.312 RUH Usage Desc #007: RUH Attributes: Unused 00:19:04.312 00:19:04.312 FDP statistics log page 00:19:04.312 ======================= 00:19:04.312 Host bytes with metadata written: 501915648 00:19:04.312 Media bytes with metadata written: 501968896 00:19:04.312 Media bytes erased: 0 00:19:04.312 00:19:04.312 FDP events log page 00:19:04.312 =================== 00:19:04.312 Number of FDP events: 0 00:19:04.312 00:19:04.312 NVM Specific Namespace Data 00:19:04.312 =========================== 00:19:04.312 Logical Block Storage Tag Mask: 0 00:19:04.312 Protection Information Capabilities: 00:19:04.312 16b Guard Protection Information Storage Tag Support: No 00:19:04.312 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:19:04.312 Storage Tag Check Read Support: No 00:19:04.312 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:04.312 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:04.312 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:04.312 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:04.312 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:04.312 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:04.312 Extended LBA Format #06: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:19:04.312 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:04.312 00:19:04.312 real 0m1.656s 00:19:04.312 user 0m0.611s 00:19:04.312 sys 0m0.834s 00:19:04.312 12:21:13 nvme.nvme_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:04.312 12:21:13 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:19:04.312 ************************************ 00:19:04.312 END TEST nvme_identify 00:19:04.312 ************************************ 00:19:04.570 12:21:13 nvme -- common/autotest_common.sh@1142 -- # return 0 00:19:04.570 12:21:13 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:19:04.570 12:21:13 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:04.570 12:21:13 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:04.570 12:21:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:04.571 ************************************ 00:19:04.571 START TEST nvme_perf 00:19:04.571 ************************************ 00:19:04.571 12:21:13 nvme.nvme_perf -- common/autotest_common.sh@1123 -- # nvme_perf 00:19:04.571 12:21:13 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:19:05.949 Initializing NVMe Controllers 00:19:05.949 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:19:05.949 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:19:05.949 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:19:05.949 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:19:05.949 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:19:05.949 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:19:05.949 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:19:05.949 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:19:05.949 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:19:05.949 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:19:05.949 Initialization complete. Launching workers. 
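For reference, the nvme_perf stage above is a thin wrapper that launches SPDK's bundled perf binary with the options shown on the nvme.sh line. The annotated invocation below is a sketch based on my reading of spdk_nvme_perf's help text rather than anything printed in this log, so treat the flag descriptions as assumptions; the binary path is the one used on this test VM.

# -q 128   queue depth per namespace
# -o 12288 I/O size in bytes (12 KiB)
# -w read  100% read workload
# -t 1     run time in seconds
# -LL      software latency tracking; giving -L twice also prints the per-range histograms
# -i 0     shared memory group ID
# -N       skip the shutdown-notification step when detaching controllers
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N

The second perf run later in this log uses the same command with -w write, which is why a second set of summary and histogram tables appears below it.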
00:19:05.949 ======================================================== 00:19:05.949 Latency(us) 00:19:05.949 Device Information : IOPS MiB/s Average min max 00:19:05.949 PCIE (0000:00:10.0) NSID 1 from core 0: 13718.76 160.77 9346.26 7776.64 55799.82 00:19:05.949 PCIE (0000:00:11.0) NSID 1 from core 0: 13718.76 160.77 9328.27 7866.84 53616.02 00:19:05.949 PCIE (0000:00:13.0) NSID 1 from core 0: 13718.76 160.77 9309.32 7901.00 52389.70 00:19:05.949 PCIE (0000:00:12.0) NSID 1 from core 0: 13718.76 160.77 9290.73 7869.74 50282.31 00:19:05.949 PCIE (0000:00:12.0) NSID 2 from core 0: 13718.76 160.77 9271.59 7907.84 48044.66 00:19:05.949 PCIE (0000:00:12.0) NSID 3 from core 0: 13782.57 161.51 9209.61 7882.16 40017.55 00:19:05.949 ======================================================== 00:19:05.949 Total : 82376.38 965.35 9292.57 7776.64 55799.82 00:19:05.949 00:19:05.949 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:19:05.949 ================================================================================= 00:19:05.949 1.00000% : 8053.822us 00:19:05.949 10.00000% : 8317.018us 00:19:05.949 25.00000% : 8580.215us 00:19:05.949 50.00000% : 8843.412us 00:19:05.949 75.00000% : 9159.248us 00:19:05.949 90.00000% : 9527.724us 00:19:05.949 95.00000% : 10422.593us 00:19:05.949 98.00000% : 12738.724us 00:19:05.949 99.00000% : 16212.922us 00:19:05.949 99.50000% : 47585.979us 00:19:05.949 99.90000% : 55587.161us 00:19:05.949 99.99000% : 56008.276us 00:19:05.949 99.99900% : 56008.276us 00:19:05.949 99.99990% : 56008.276us 00:19:05.949 99.99999% : 56008.276us 00:19:05.949 00:19:05.949 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:19:05.949 ================================================================================= 00:19:05.949 1.00000% : 8106.461us 00:19:05.949 10.00000% : 8369.658us 00:19:05.949 25.00000% : 8580.215us 00:19:05.949 50.00000% : 8843.412us 00:19:05.949 75.00000% : 9159.248us 00:19:05.949 90.00000% : 9475.084us 00:19:05.949 95.00000% : 10317.314us 00:19:05.949 98.00000% : 12580.806us 00:19:05.949 99.00000% : 16739.316us 00:19:05.949 99.50000% : 45690.962us 00:19:05.949 99.90000% : 53271.030us 00:19:05.949 99.99000% : 53692.145us 00:19:05.949 99.99900% : 53692.145us 00:19:05.949 99.99990% : 53692.145us 00:19:05.949 99.99999% : 53692.145us 00:19:05.949 00:19:05.949 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:19:05.949 ================================================================================= 00:19:05.949 1.00000% : 8106.461us 00:19:05.949 10.00000% : 8369.658us 00:19:05.949 25.00000% : 8580.215us 00:19:05.949 50.00000% : 8843.412us 00:19:05.949 75.00000% : 9159.248us 00:19:05.949 90.00000% : 9475.084us 00:19:05.949 95.00000% : 10264.675us 00:19:05.949 98.00000% : 12686.085us 00:19:05.949 99.00000% : 16318.201us 00:19:05.949 99.50000% : 44638.175us 00:19:05.949 99.90000% : 52007.685us 00:19:05.949 99.99000% : 52428.800us 00:19:05.949 99.99900% : 52428.800us 00:19:05.949 99.99990% : 52428.800us 00:19:05.949 99.99999% : 52428.800us 00:19:05.949 00:19:05.950 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:19:05.950 ================================================================================= 00:19:05.950 1.00000% : 8106.461us 00:19:05.950 10.00000% : 8369.658us 00:19:05.950 25.00000% : 8580.215us 00:19:05.950 50.00000% : 8843.412us 00:19:05.950 75.00000% : 9159.248us 00:19:05.950 90.00000% : 9475.084us 00:19:05.950 95.00000% : 10369.953us 00:19:05.950 98.00000% : 12791.364us 00:19:05.950 99.00000% : 
15791.807us 00:19:05.950 99.50000% : 42532.601us 00:19:05.950 99.90000% : 50112.668us 00:19:05.950 99.99000% : 50323.226us 00:19:05.950 99.99900% : 50323.226us 00:19:05.950 99.99990% : 50323.226us 00:19:05.950 99.99999% : 50323.226us 00:19:05.950 00:19:05.950 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:19:05.950 ================================================================================= 00:19:05.950 1.00000% : 8106.461us 00:19:05.950 10.00000% : 8369.658us 00:19:05.950 25.00000% : 8580.215us 00:19:05.950 50.00000% : 8843.412us 00:19:05.950 75.00000% : 9159.248us 00:19:05.950 90.00000% : 9527.724us 00:19:05.950 95.00000% : 10422.593us 00:19:05.950 98.00000% : 12264.970us 00:19:05.950 99.00000% : 15370.692us 00:19:05.950 99.50000% : 40637.584us 00:19:05.950 99.90000% : 47796.537us 00:19:05.950 99.99000% : 48217.651us 00:19:05.950 99.99900% : 48217.651us 00:19:05.950 99.99990% : 48217.651us 00:19:05.950 99.99999% : 48217.651us 00:19:05.950 00:19:05.950 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:19:05.950 ================================================================================= 00:19:05.950 1.00000% : 8106.461us 00:19:05.950 10.00000% : 8369.658us 00:19:05.950 25.00000% : 8580.215us 00:19:05.950 50.00000% : 8843.412us 00:19:05.950 75.00000% : 9159.248us 00:19:05.950 90.00000% : 9527.724us 00:19:05.950 95.00000% : 10527.871us 00:19:05.950 98.00000% : 12528.167us 00:19:05.950 99.00000% : 15370.692us 00:19:05.950 99.50000% : 32004.729us 00:19:05.950 99.90000% : 39795.354us 00:19:05.950 99.99000% : 40005.912us 00:19:05.950 99.99900% : 40216.469us 00:19:05.950 99.99990% : 40216.469us 00:19:05.950 99.99999% : 40216.469us 00:19:05.950 00:19:05.950 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:19:05.950 ============================================================================== 00:19:05.950 Range in us Cumulative IO count 00:19:05.950 7737.986 - 7790.625: 0.0145% ( 2) 00:19:05.950 7790.625 - 7843.264: 0.0581% ( 6) 00:19:05.950 7843.264 - 7895.904: 0.1599% ( 14) 00:19:05.950 7895.904 - 7948.543: 0.3706% ( 29) 00:19:05.950 7948.543 - 8001.182: 0.7776% ( 56) 00:19:05.950 8001.182 - 8053.822: 1.5116% ( 101) 00:19:05.950 8053.822 - 8106.461: 2.5436% ( 142) 00:19:05.950 8106.461 - 8159.100: 3.9317% ( 191) 00:19:05.950 8159.100 - 8211.740: 5.7994% ( 257) 00:19:05.950 8211.740 - 8264.379: 7.7834% ( 273) 00:19:05.950 8264.379 - 8317.018: 10.3343% ( 351) 00:19:05.950 8317.018 - 8369.658: 13.3285% ( 412) 00:19:05.950 8369.658 - 8422.297: 16.6788% ( 461) 00:19:05.950 8422.297 - 8474.937: 20.1744% ( 481) 00:19:05.950 8474.937 - 8527.576: 24.3750% ( 578) 00:19:05.950 8527.576 - 8580.215: 28.4884% ( 566) 00:19:05.950 8580.215 - 8632.855: 33.0669% ( 630) 00:19:05.950 8632.855 - 8685.494: 37.6017% ( 624) 00:19:05.950 8685.494 - 8738.133: 42.1221% ( 622) 00:19:05.950 8738.133 - 8790.773: 46.7805% ( 641) 00:19:05.950 8790.773 - 8843.412: 51.5044% ( 650) 00:19:05.950 8843.412 - 8896.051: 55.9738% ( 615) 00:19:05.950 8896.051 - 8948.691: 60.5305% ( 627) 00:19:05.950 8948.691 - 9001.330: 64.8837% ( 599) 00:19:05.950 9001.330 - 9053.969: 68.8227% ( 542) 00:19:05.950 9053.969 - 9106.609: 72.5436% ( 512) 00:19:05.950 9106.609 - 9159.248: 76.0610% ( 484) 00:19:05.950 9159.248 - 9211.888: 79.1061% ( 419) 00:19:05.950 9211.888 - 9264.527: 81.9041% ( 385) 00:19:05.950 9264.527 - 9317.166: 84.3605% ( 338) 00:19:05.950 9317.166 - 9369.806: 86.2718% ( 263) 00:19:05.950 9369.806 - 9422.445: 87.8779% ( 221) 00:19:05.950 9422.445 - 9475.084: 89.3096% ( 
197) 00:19:05.950 9475.084 - 9527.724: 90.3779% ( 147) 00:19:05.950 9527.724 - 9580.363: 91.0610% ( 94) 00:19:05.950 9580.363 - 9633.002: 91.5988% ( 74) 00:19:05.950 9633.002 - 9685.642: 91.9985% ( 55) 00:19:05.950 9685.642 - 9738.281: 92.3110% ( 43) 00:19:05.950 9738.281 - 9790.920: 92.6235% ( 43) 00:19:05.950 9790.920 - 9843.560: 92.8488% ( 31) 00:19:05.950 9843.560 - 9896.199: 93.0887% ( 33) 00:19:05.950 9896.199 - 9948.839: 93.3285% ( 33) 00:19:05.950 9948.839 - 10001.478: 93.5320% ( 28) 00:19:05.950 10001.478 - 10054.117: 93.7282% ( 27) 00:19:05.950 10054.117 - 10106.757: 93.9390% ( 29) 00:19:05.950 10106.757 - 10159.396: 94.1497% ( 29) 00:19:05.950 10159.396 - 10212.035: 94.3169% ( 23) 00:19:05.950 10212.035 - 10264.675: 94.4840% ( 23) 00:19:05.950 10264.675 - 10317.314: 94.6584% ( 24) 00:19:05.950 10317.314 - 10369.953: 94.8183% ( 22) 00:19:05.950 10369.953 - 10422.593: 95.0145% ( 27) 00:19:05.950 10422.593 - 10475.232: 95.1817% ( 23) 00:19:05.950 10475.232 - 10527.871: 95.3634% ( 25) 00:19:05.950 10527.871 - 10580.511: 95.5015% ( 19) 00:19:05.950 10580.511 - 10633.150: 95.6250% ( 17) 00:19:05.950 10633.150 - 10685.790: 95.7195% ( 13) 00:19:05.950 10685.790 - 10738.429: 95.7776% ( 8) 00:19:05.950 10738.429 - 10791.068: 95.8576% ( 11) 00:19:05.950 10791.068 - 10843.708: 95.9157% ( 8) 00:19:05.950 10843.708 - 10896.347: 96.0029% ( 12) 00:19:05.950 10896.347 - 10948.986: 96.1410% ( 19) 00:19:05.950 10948.986 - 11001.626: 96.2573% ( 16) 00:19:05.950 11001.626 - 11054.265: 96.3227% ( 9) 00:19:05.950 11054.265 - 11106.904: 96.3953% ( 10) 00:19:05.950 11106.904 - 11159.544: 96.4680% ( 10) 00:19:05.950 11159.544 - 11212.183: 96.5334% ( 9) 00:19:05.950 11212.183 - 11264.822: 96.5988% ( 9) 00:19:05.950 11264.822 - 11317.462: 96.6788% ( 11) 00:19:05.950 11317.462 - 11370.101: 96.7587% ( 11) 00:19:05.950 11370.101 - 11422.741: 96.8241% ( 9) 00:19:05.950 11422.741 - 11475.380: 96.8895% ( 9) 00:19:05.950 11475.380 - 11528.019: 96.9549% ( 9) 00:19:05.950 11528.019 - 11580.659: 97.0203% ( 9) 00:19:05.950 11580.659 - 11633.298: 97.0785% ( 8) 00:19:05.950 11633.298 - 11685.937: 97.1512% ( 10) 00:19:05.950 11685.937 - 11738.577: 97.2311% ( 11) 00:19:05.950 11738.577 - 11791.216: 97.3110% ( 11) 00:19:05.950 11791.216 - 11843.855: 97.3765% ( 9) 00:19:05.950 11843.855 - 11896.495: 97.4128% ( 5) 00:19:05.950 11896.495 - 11949.134: 97.4709% ( 8) 00:19:05.950 11949.134 - 12001.773: 97.5000% ( 4) 00:19:05.950 12001.773 - 12054.413: 97.5509% ( 7) 00:19:05.950 12054.413 - 12107.052: 97.5945% ( 6) 00:19:05.950 12107.052 - 12159.692: 97.6308% ( 5) 00:19:05.950 12159.692 - 12212.331: 97.6599% ( 4) 00:19:05.950 12212.331 - 12264.970: 97.6890% ( 4) 00:19:05.950 12264.970 - 12317.610: 97.7180% ( 4) 00:19:05.950 12317.610 - 12370.249: 97.7544% ( 5) 00:19:05.950 12370.249 - 12422.888: 97.7907% ( 5) 00:19:05.950 12422.888 - 12475.528: 97.8270% ( 5) 00:19:05.950 12475.528 - 12528.167: 97.8561% ( 4) 00:19:05.950 12528.167 - 12580.806: 97.9070% ( 7) 00:19:05.950 12580.806 - 12633.446: 97.9360% ( 4) 00:19:05.950 12633.446 - 12686.085: 97.9869% ( 7) 00:19:05.950 12686.085 - 12738.724: 98.0378% ( 7) 00:19:05.950 12738.724 - 12791.364: 98.0741% ( 5) 00:19:05.950 12791.364 - 12844.003: 98.1105% ( 5) 00:19:05.950 12844.003 - 12896.643: 98.1395% ( 4) 00:19:05.950 12896.643 - 12949.282: 98.1831% ( 6) 00:19:05.950 12949.282 - 13001.921: 98.2122% ( 4) 00:19:05.950 13001.921 - 13054.561: 98.2413% ( 4) 00:19:05.950 13054.561 - 13107.200: 98.2558% ( 2) 00:19:05.950 13107.200 - 13159.839: 98.2631% ( 1) 00:19:05.950 13159.839 - 
13212.479: 98.2776% ( 2) 00:19:05.950 13212.479 - 13265.118: 98.2849% ( 1) 00:19:05.950 13265.118 - 13317.757: 98.2922% ( 1) 00:19:05.950 13317.757 - 13370.397: 98.3067% ( 2) 00:19:05.950 13370.397 - 13423.036: 98.3140% ( 1) 00:19:05.950 13423.036 - 13475.676: 98.3212% ( 1) 00:19:05.950 13475.676 - 13580.954: 98.3430% ( 3) 00:19:05.950 13580.954 - 13686.233: 98.3648% ( 3) 00:19:05.950 13686.233 - 13791.512: 98.3866% ( 3) 00:19:05.950 13791.512 - 13896.790: 98.4084% ( 3) 00:19:05.950 13896.790 - 14002.069: 98.4302% ( 3) 00:19:05.950 14002.069 - 14107.348: 98.4520% ( 3) 00:19:05.950 14107.348 - 14212.627: 98.4738% ( 3) 00:19:05.950 14212.627 - 14317.905: 98.5029% ( 4) 00:19:05.950 14317.905 - 14423.184: 98.5174% ( 2) 00:19:05.950 14423.184 - 14528.463: 98.5538% ( 5) 00:19:05.950 14528.463 - 14633.741: 98.6119% ( 8) 00:19:05.950 14633.741 - 14739.020: 98.6628% ( 7) 00:19:05.950 14739.020 - 14844.299: 98.6991% ( 5) 00:19:05.950 14844.299 - 14949.578: 98.7209% ( 3) 00:19:05.950 14949.578 - 15054.856: 98.7500% ( 4) 00:19:05.950 15054.856 - 15160.135: 98.7718% ( 3) 00:19:05.950 15160.135 - 15265.414: 98.8081% ( 5) 00:19:05.950 15265.414 - 15370.692: 98.8299% ( 3) 00:19:05.950 15370.692 - 15475.971: 98.8590% ( 4) 00:19:05.950 15475.971 - 15581.250: 98.8735% ( 2) 00:19:05.950 15581.250 - 15686.529: 98.9026% ( 4) 00:19:05.950 15686.529 - 15791.807: 98.9172% ( 2) 00:19:05.950 15791.807 - 15897.086: 98.9390% ( 3) 00:19:05.950 15897.086 - 16002.365: 98.9608% ( 3) 00:19:05.950 16002.365 - 16107.643: 98.9898% ( 4) 00:19:05.950 16107.643 - 16212.922: 99.0189% ( 4) 00:19:05.950 16212.922 - 16318.201: 99.0480% ( 4) 00:19:05.950 16318.201 - 16423.480: 99.0698% ( 3) 00:19:05.950 45480.405 - 45690.962: 99.1061% ( 5) 00:19:05.950 45690.962 - 45901.520: 99.1570% ( 7) 00:19:05.950 45901.520 - 46112.077: 99.2078% ( 7) 00:19:05.950 46112.077 - 46322.635: 99.2587% ( 7) 00:19:05.950 46322.635 - 46533.192: 99.3023% ( 6) 00:19:05.950 46533.192 - 46743.749: 99.3532% ( 7) 00:19:05.950 46743.749 - 46954.307: 99.4041% ( 7) 00:19:05.950 46954.307 - 47164.864: 99.4477% ( 6) 00:19:05.950 47164.864 - 47375.422: 99.4985% ( 7) 00:19:05.950 47375.422 - 47585.979: 99.5349% ( 5) 00:19:05.950 53692.145 - 53902.702: 99.5422% ( 1) 00:19:05.950 53902.702 - 54323.817: 99.6512% ( 15) 00:19:05.950 54323.817 - 54744.932: 99.7384% ( 12) 00:19:05.950 54744.932 - 55166.047: 99.8474% ( 15) 00:19:05.950 55166.047 - 55587.161: 99.9491% ( 14) 00:19:05.950 55587.161 - 56008.276: 100.0000% ( 7) 00:19:05.950 00:19:05.950 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:19:05.951 ============================================================================== 00:19:05.951 Range in us Cumulative IO count 00:19:05.951 7843.264 - 7895.904: 0.0436% ( 6) 00:19:05.951 7895.904 - 7948.543: 0.1235% ( 11) 00:19:05.951 7948.543 - 8001.182: 0.2980% ( 24) 00:19:05.951 8001.182 - 8053.822: 0.7631% ( 64) 00:19:05.951 8053.822 - 8106.461: 1.4462% ( 94) 00:19:05.951 8106.461 - 8159.100: 2.5872% ( 157) 00:19:05.951 8159.100 - 8211.740: 3.9462% ( 187) 00:19:05.951 8211.740 - 8264.379: 5.6541% ( 235) 00:19:05.951 8264.379 - 8317.018: 8.0596% ( 331) 00:19:05.951 8317.018 - 8369.658: 10.5669% ( 345) 00:19:05.951 8369.658 - 8422.297: 13.7573% ( 439) 00:19:05.951 8422.297 - 8474.937: 17.5436% ( 521) 00:19:05.951 8474.937 - 8527.576: 21.6642% ( 567) 00:19:05.951 8527.576 - 8580.215: 26.2209% ( 627) 00:19:05.951 8580.215 - 8632.855: 31.0538% ( 665) 00:19:05.951 8632.855 - 8685.494: 36.1991% ( 708) 00:19:05.951 8685.494 - 8738.133: 41.5552% ( 737) 00:19:05.951 
8738.133 - 8790.773: 46.9041% ( 736) 00:19:05.951 8790.773 - 8843.412: 52.2820% ( 740) 00:19:05.951 8843.412 - 8896.051: 57.3910% ( 703) 00:19:05.951 8896.051 - 8948.691: 62.2602% ( 670) 00:19:05.951 8948.691 - 9001.330: 66.7660% ( 620) 00:19:05.951 9001.330 - 9053.969: 71.0538% ( 590) 00:19:05.951 9053.969 - 9106.609: 74.8692% ( 525) 00:19:05.951 9106.609 - 9159.248: 78.3140% ( 474) 00:19:05.951 9159.248 - 9211.888: 81.4680% ( 434) 00:19:05.951 9211.888 - 9264.527: 84.0770% ( 359) 00:19:05.951 9264.527 - 9317.166: 86.2645% ( 301) 00:19:05.951 9317.166 - 9369.806: 87.8706% ( 221) 00:19:05.951 9369.806 - 9422.445: 89.1933% ( 182) 00:19:05.951 9422.445 - 9475.084: 90.1453% ( 131) 00:19:05.951 9475.084 - 9527.724: 90.8430% ( 96) 00:19:05.951 9527.724 - 9580.363: 91.2936% ( 62) 00:19:05.951 9580.363 - 9633.002: 91.6061% ( 43) 00:19:05.951 9633.002 - 9685.642: 91.9477% ( 47) 00:19:05.951 9685.642 - 9738.281: 92.2892% ( 47) 00:19:05.951 9738.281 - 9790.920: 92.6308% ( 47) 00:19:05.951 9790.920 - 9843.560: 92.9433% ( 43) 00:19:05.951 9843.560 - 9896.199: 93.2267% ( 39) 00:19:05.951 9896.199 - 9948.839: 93.4448% ( 30) 00:19:05.951 9948.839 - 10001.478: 93.6555% ( 29) 00:19:05.951 10001.478 - 10054.117: 93.8590% ( 28) 00:19:05.951 10054.117 - 10106.757: 94.1352% ( 38) 00:19:05.951 10106.757 - 10159.396: 94.4331% ( 41) 00:19:05.951 10159.396 - 10212.035: 94.6512% ( 30) 00:19:05.951 10212.035 - 10264.675: 94.8474% ( 27) 00:19:05.951 10264.675 - 10317.314: 95.0363% ( 26) 00:19:05.951 10317.314 - 10369.953: 95.2035% ( 23) 00:19:05.951 10369.953 - 10422.593: 95.3779% ( 24) 00:19:05.951 10422.593 - 10475.232: 95.5305% ( 21) 00:19:05.951 10475.232 - 10527.871: 95.6468% ( 16) 00:19:05.951 10527.871 - 10580.511: 95.7485% ( 14) 00:19:05.951 10580.511 - 10633.150: 95.7994% ( 7) 00:19:05.951 10633.150 - 10685.790: 95.8648% ( 9) 00:19:05.951 10685.790 - 10738.429: 95.9230% ( 8) 00:19:05.951 10738.429 - 10791.068: 95.9884% ( 9) 00:19:05.951 10791.068 - 10843.708: 96.0465% ( 8) 00:19:05.951 10843.708 - 10896.347: 96.1047% ( 8) 00:19:05.951 10896.347 - 10948.986: 96.1773% ( 10) 00:19:05.951 10948.986 - 11001.626: 96.2500% ( 10) 00:19:05.951 11001.626 - 11054.265: 96.3663% ( 16) 00:19:05.951 11054.265 - 11106.904: 96.4535% ( 12) 00:19:05.951 11106.904 - 11159.544: 96.5189% ( 9) 00:19:05.951 11159.544 - 11212.183: 96.5770% ( 8) 00:19:05.951 11212.183 - 11264.822: 96.6279% ( 7) 00:19:05.951 11264.822 - 11317.462: 96.6788% ( 7) 00:19:05.951 11317.462 - 11370.101: 96.7369% ( 8) 00:19:05.951 11370.101 - 11422.741: 96.7878% ( 7) 00:19:05.951 11422.741 - 11475.380: 96.8459% ( 8) 00:19:05.951 11475.380 - 11528.019: 96.9259% ( 11) 00:19:05.951 11528.019 - 11580.659: 97.0058% ( 11) 00:19:05.951 11580.659 - 11633.298: 97.0858% ( 11) 00:19:05.951 11633.298 - 11685.937: 97.1730% ( 12) 00:19:05.951 11685.937 - 11738.577: 97.2456% ( 10) 00:19:05.951 11738.577 - 11791.216: 97.3256% ( 11) 00:19:05.951 11791.216 - 11843.855: 97.3837% ( 8) 00:19:05.951 11843.855 - 11896.495: 97.4564% ( 10) 00:19:05.951 11896.495 - 11949.134: 97.5000% ( 6) 00:19:05.951 11949.134 - 12001.773: 97.5363% ( 5) 00:19:05.951 12001.773 - 12054.413: 97.5727% ( 5) 00:19:05.951 12054.413 - 12107.052: 97.6163% ( 6) 00:19:05.951 12107.052 - 12159.692: 97.6599% ( 6) 00:19:05.951 12159.692 - 12212.331: 97.7326% ( 10) 00:19:05.951 12212.331 - 12264.970: 97.7980% ( 9) 00:19:05.951 12264.970 - 12317.610: 97.8416% ( 6) 00:19:05.951 12317.610 - 12370.249: 97.8924% ( 7) 00:19:05.951 12370.249 - 12422.888: 97.9360% ( 6) 00:19:05.951 12422.888 - 12475.528: 97.9578% ( 3) 
00:19:05.951 12475.528 - 12528.167: 97.9869% ( 4) 00:19:05.951 12528.167 - 12580.806: 98.0160% ( 4) 00:19:05.951 12580.806 - 12633.446: 98.0451% ( 4) 00:19:05.951 12633.446 - 12686.085: 98.0669% ( 3) 00:19:05.951 12686.085 - 12738.724: 98.0887% ( 3) 00:19:05.951 12738.724 - 12791.364: 98.1177% ( 4) 00:19:05.951 12791.364 - 12844.003: 98.1468% ( 4) 00:19:05.951 12844.003 - 12896.643: 98.1686% ( 3) 00:19:05.951 12896.643 - 12949.282: 98.1977% ( 4) 00:19:05.951 12949.282 - 13001.921: 98.2195% ( 3) 00:19:05.951 13001.921 - 13054.561: 98.2413% ( 3) 00:19:05.951 13054.561 - 13107.200: 98.2703% ( 4) 00:19:05.951 13107.200 - 13159.839: 98.2994% ( 4) 00:19:05.951 13159.839 - 13212.479: 98.3285% ( 4) 00:19:05.951 13212.479 - 13265.118: 98.3503% ( 3) 00:19:05.951 13265.118 - 13317.757: 98.3794% ( 4) 00:19:05.951 13317.757 - 13370.397: 98.4012% ( 3) 00:19:05.951 13370.397 - 13423.036: 98.4230% ( 3) 00:19:05.951 13423.036 - 13475.676: 98.4520% ( 4) 00:19:05.951 13475.676 - 13580.954: 98.4738% ( 3) 00:19:05.951 13580.954 - 13686.233: 98.5029% ( 4) 00:19:05.951 13686.233 - 13791.512: 98.5247% ( 3) 00:19:05.951 13791.512 - 13896.790: 98.5465% ( 3) 00:19:05.951 13896.790 - 14002.069: 98.5756% ( 4) 00:19:05.951 14002.069 - 14107.348: 98.5974% ( 3) 00:19:05.951 14107.348 - 14212.627: 98.6047% ( 1) 00:19:05.951 15370.692 - 15475.971: 98.6265% ( 3) 00:19:05.951 15475.971 - 15581.250: 98.6555% ( 4) 00:19:05.951 15581.250 - 15686.529: 98.6846% ( 4) 00:19:05.951 15686.529 - 15791.807: 98.7209% ( 5) 00:19:05.951 15791.807 - 15897.086: 98.7500% ( 4) 00:19:05.951 15897.086 - 16002.365: 98.7718% ( 3) 00:19:05.951 16002.365 - 16107.643: 98.8081% ( 5) 00:19:05.951 16107.643 - 16212.922: 98.8372% ( 4) 00:19:05.951 16212.922 - 16318.201: 98.8663% ( 4) 00:19:05.951 16318.201 - 16423.480: 98.8953% ( 4) 00:19:05.951 16423.480 - 16528.758: 98.9390% ( 6) 00:19:05.951 16528.758 - 16634.037: 98.9826% ( 6) 00:19:05.951 16634.037 - 16739.316: 99.0262% ( 6) 00:19:05.951 16739.316 - 16844.594: 99.0625% ( 5) 00:19:05.951 16844.594 - 16949.873: 99.0698% ( 1) 00:19:05.951 43585.388 - 43795.945: 99.0916% ( 3) 00:19:05.951 43795.945 - 44006.503: 99.1352% ( 6) 00:19:05.951 44006.503 - 44217.060: 99.1860% ( 7) 00:19:05.951 44217.060 - 44427.618: 99.2369% ( 7) 00:19:05.951 44427.618 - 44638.175: 99.2878% ( 7) 00:19:05.951 44638.175 - 44848.733: 99.3459% ( 8) 00:19:05.951 44848.733 - 45059.290: 99.3895% ( 6) 00:19:05.951 45059.290 - 45269.847: 99.4404% ( 7) 00:19:05.951 45269.847 - 45480.405: 99.4913% ( 7) 00:19:05.951 45480.405 - 45690.962: 99.5349% ( 6) 00:19:05.951 51586.570 - 51797.128: 99.5494% ( 2) 00:19:05.951 51797.128 - 52007.685: 99.6003% ( 7) 00:19:05.951 52007.685 - 52218.243: 99.6512% ( 7) 00:19:05.951 52218.243 - 52428.800: 99.7020% ( 7) 00:19:05.951 52428.800 - 52639.357: 99.7529% ( 7) 00:19:05.951 52639.357 - 52849.915: 99.8110% ( 8) 00:19:05.951 52849.915 - 53060.472: 99.8619% ( 7) 00:19:05.951 53060.472 - 53271.030: 99.9128% ( 7) 00:19:05.951 53271.030 - 53481.587: 99.9637% ( 7) 00:19:05.951 53481.587 - 53692.145: 100.0000% ( 5) 00:19:05.951 00:19:05.951 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:19:05.951 ============================================================================== 00:19:05.951 Range in us Cumulative IO count 00:19:05.951 7895.904 - 7948.543: 0.1235% ( 17) 00:19:05.951 7948.543 - 8001.182: 0.3488% ( 31) 00:19:05.951 8001.182 - 8053.822: 0.6468% ( 41) 00:19:05.951 8053.822 - 8106.461: 1.3445% ( 96) 00:19:05.951 8106.461 - 8159.100: 2.2747% ( 128) 00:19:05.951 8159.100 - 8211.740: 
3.6119% ( 184) 00:19:05.951 8211.740 - 8264.379: 5.5305% ( 264) 00:19:05.951 8264.379 - 8317.018: 7.8270% ( 316) 00:19:05.951 8317.018 - 8369.658: 10.3634% ( 349) 00:19:05.951 8369.658 - 8422.297: 13.6265% ( 449) 00:19:05.951 8422.297 - 8474.937: 17.3547% ( 513) 00:19:05.951 8474.937 - 8527.576: 21.5189% ( 573) 00:19:05.951 8527.576 - 8580.215: 26.2427% ( 650) 00:19:05.951 8580.215 - 8632.855: 31.1919% ( 681) 00:19:05.951 8632.855 - 8685.494: 36.3445% ( 709) 00:19:05.951 8685.494 - 8738.133: 41.6497% ( 730) 00:19:05.951 8738.133 - 8790.773: 46.8968% ( 722) 00:19:05.951 8790.773 - 8843.412: 52.1439% ( 722) 00:19:05.951 8843.412 - 8896.051: 57.3038% ( 710) 00:19:05.951 8896.051 - 8948.691: 62.2747% ( 684) 00:19:05.951 8948.691 - 9001.330: 66.8823% ( 634) 00:19:05.951 9001.330 - 9053.969: 71.1991% ( 594) 00:19:05.951 9053.969 - 9106.609: 74.8983% ( 509) 00:19:05.951 9106.609 - 9159.248: 78.3285% ( 472) 00:19:05.951 9159.248 - 9211.888: 81.4099% ( 424) 00:19:05.951 9211.888 - 9264.527: 84.1279% ( 374) 00:19:05.951 9264.527 - 9317.166: 86.3299% ( 303) 00:19:05.951 9317.166 - 9369.806: 88.0523% ( 237) 00:19:05.951 9369.806 - 9422.445: 89.3677% ( 181) 00:19:05.951 9422.445 - 9475.084: 90.3416% ( 134) 00:19:05.951 9475.084 - 9527.724: 91.0392% ( 96) 00:19:05.951 9527.724 - 9580.363: 91.5407% ( 69) 00:19:05.951 9580.363 - 9633.002: 92.0058% ( 64) 00:19:05.951 9633.002 - 9685.642: 92.3474% ( 47) 00:19:05.951 9685.642 - 9738.281: 92.6890% ( 47) 00:19:05.951 9738.281 - 9790.920: 92.9360% ( 34) 00:19:05.951 9790.920 - 9843.560: 93.2049% ( 37) 00:19:05.951 9843.560 - 9896.199: 93.4811% ( 38) 00:19:05.951 9896.199 - 9948.839: 93.7137% ( 32) 00:19:05.951 9948.839 - 10001.478: 93.9317% ( 30) 00:19:05.951 10001.478 - 10054.117: 94.1933% ( 36) 00:19:05.951 10054.117 - 10106.757: 94.4259% ( 32) 00:19:05.951 10106.757 - 10159.396: 94.6512% ( 31) 00:19:05.951 10159.396 - 10212.035: 94.8474% ( 27) 00:19:05.951 10212.035 - 10264.675: 95.0436% ( 27) 00:19:05.951 10264.675 - 10317.314: 95.2326% ( 26) 00:19:05.951 10317.314 - 10369.953: 95.4070% ( 24) 00:19:05.951 10369.953 - 10422.593: 95.5596% ( 21) 00:19:05.952 10422.593 - 10475.232: 95.6541% ( 13) 00:19:05.952 10475.232 - 10527.871: 95.7340% ( 11) 00:19:05.952 10527.871 - 10580.511: 95.7994% ( 9) 00:19:05.952 10580.511 - 10633.150: 95.8721% ( 10) 00:19:05.952 10633.150 - 10685.790: 95.9666% ( 13) 00:19:05.952 10685.790 - 10738.429: 96.0465% ( 11) 00:19:05.952 10738.429 - 10791.068: 96.1265% ( 11) 00:19:05.952 10791.068 - 10843.708: 96.2137% ( 12) 00:19:05.952 10843.708 - 10896.347: 96.3372% ( 17) 00:19:05.952 10896.347 - 10948.986: 96.4390% ( 14) 00:19:05.952 10948.986 - 11001.626: 96.5698% ( 18) 00:19:05.952 11001.626 - 11054.265: 96.6642% ( 13) 00:19:05.952 11054.265 - 11106.904: 96.7587% ( 13) 00:19:05.952 11106.904 - 11159.544: 96.8387% ( 11) 00:19:05.952 11159.544 - 11212.183: 96.9259% ( 12) 00:19:05.952 11212.183 - 11264.822: 97.0058% ( 11) 00:19:05.952 11264.822 - 11317.462: 97.0930% ( 12) 00:19:05.952 11317.462 - 11370.101: 97.1657% ( 10) 00:19:05.952 11370.101 - 11422.741: 97.2384% ( 10) 00:19:05.952 11422.741 - 11475.380: 97.3256% ( 12) 00:19:05.952 11475.380 - 11528.019: 97.3910% ( 9) 00:19:05.952 11528.019 - 11580.659: 97.4491% ( 8) 00:19:05.952 11580.659 - 11633.298: 97.5073% ( 8) 00:19:05.952 11633.298 - 11685.937: 97.5654% ( 8) 00:19:05.952 11685.937 - 11738.577: 97.6235% ( 8) 00:19:05.952 11738.577 - 11791.216: 97.6526% ( 4) 00:19:05.952 11791.216 - 11843.855: 97.6744% ( 3) 00:19:05.952 11949.134 - 12001.773: 97.6890% ( 2) 00:19:05.952 12001.773 
- 12054.413: 97.7035% ( 2) 00:19:05.952 12054.413 - 12107.052: 97.7253% ( 3) 00:19:05.952 12107.052 - 12159.692: 97.7398% ( 2) 00:19:05.952 12159.692 - 12212.331: 97.7616% ( 3) 00:19:05.952 12212.331 - 12264.970: 97.7834% ( 3) 00:19:05.952 12264.970 - 12317.610: 97.8198% ( 5) 00:19:05.952 12317.610 - 12370.249: 97.8488% ( 4) 00:19:05.952 12370.249 - 12422.888: 97.8779% ( 4) 00:19:05.952 12422.888 - 12475.528: 97.8997% ( 3) 00:19:05.952 12475.528 - 12528.167: 97.9360% ( 5) 00:19:05.952 12528.167 - 12580.806: 97.9651% ( 4) 00:19:05.952 12580.806 - 12633.446: 97.9797% ( 2) 00:19:05.952 12633.446 - 12686.085: 98.0233% ( 6) 00:19:05.952 12686.085 - 12738.724: 98.0451% ( 3) 00:19:05.952 12738.724 - 12791.364: 98.0741% ( 4) 00:19:05.952 12791.364 - 12844.003: 98.1032% ( 4) 00:19:05.952 12844.003 - 12896.643: 98.1250% ( 3) 00:19:05.952 12896.643 - 12949.282: 98.1541% ( 4) 00:19:05.952 12949.282 - 13001.921: 98.1759% ( 3) 00:19:05.952 13001.921 - 13054.561: 98.2049% ( 4) 00:19:05.952 13054.561 - 13107.200: 98.2340% ( 4) 00:19:05.952 13107.200 - 13159.839: 98.2558% ( 3) 00:19:05.952 13159.839 - 13212.479: 98.2849% ( 4) 00:19:05.952 13212.479 - 13265.118: 98.3140% ( 4) 00:19:05.952 13265.118 - 13317.757: 98.3358% ( 3) 00:19:05.952 13317.757 - 13370.397: 98.3648% ( 4) 00:19:05.952 13370.397 - 13423.036: 98.3939% ( 4) 00:19:05.952 13423.036 - 13475.676: 98.4157% ( 3) 00:19:05.952 13475.676 - 13580.954: 98.4593% ( 6) 00:19:05.952 13580.954 - 13686.233: 98.5174% ( 8) 00:19:05.952 13686.233 - 13791.512: 98.5610% ( 6) 00:19:05.952 13791.512 - 13896.790: 98.5901% ( 4) 00:19:05.952 13896.790 - 14002.069: 98.6047% ( 2) 00:19:05.952 14633.741 - 14739.020: 98.6483% ( 6) 00:19:05.952 14739.020 - 14844.299: 98.6701% ( 3) 00:19:05.952 14844.299 - 14949.578: 98.6919% ( 3) 00:19:05.952 14949.578 - 15054.856: 98.7137% ( 3) 00:19:05.952 15054.856 - 15160.135: 98.7355% ( 3) 00:19:05.952 15160.135 - 15265.414: 98.7573% ( 3) 00:19:05.952 15265.414 - 15370.692: 98.7863% ( 4) 00:19:05.952 15370.692 - 15475.971: 98.8154% ( 4) 00:19:05.952 15475.971 - 15581.250: 98.8372% ( 3) 00:19:05.952 15581.250 - 15686.529: 98.8663% ( 4) 00:19:05.952 15686.529 - 15791.807: 98.8881% ( 3) 00:19:05.952 15791.807 - 15897.086: 98.9172% ( 4) 00:19:05.952 15897.086 - 16002.365: 98.9462% ( 4) 00:19:05.952 16002.365 - 16107.643: 98.9680% ( 3) 00:19:05.952 16107.643 - 16212.922: 98.9971% ( 4) 00:19:05.952 16212.922 - 16318.201: 99.0189% ( 3) 00:19:05.952 16318.201 - 16423.480: 99.0480% ( 4) 00:19:05.952 16423.480 - 16528.758: 99.0625% ( 2) 00:19:05.952 16528.758 - 16634.037: 99.0698% ( 1) 00:19:05.952 42743.158 - 42953.716: 99.0843% ( 2) 00:19:05.952 42953.716 - 43164.273: 99.1352% ( 7) 00:19:05.952 43164.273 - 43374.831: 99.1860% ( 7) 00:19:05.952 43374.831 - 43585.388: 99.2369% ( 7) 00:19:05.952 43585.388 - 43795.945: 99.2951% ( 8) 00:19:05.952 43795.945 - 44006.503: 99.3459% ( 7) 00:19:05.952 44006.503 - 44217.060: 99.4041% ( 8) 00:19:05.952 44217.060 - 44427.618: 99.4549% ( 7) 00:19:05.952 44427.618 - 44638.175: 99.5058% ( 7) 00:19:05.952 44638.175 - 44848.733: 99.5349% ( 4) 00:19:05.952 50323.226 - 50533.783: 99.5494% ( 2) 00:19:05.952 50533.783 - 50744.341: 99.6003% ( 7) 00:19:05.952 50744.341 - 50954.898: 99.6512% ( 7) 00:19:05.952 50954.898 - 51165.455: 99.7020% ( 7) 00:19:05.952 51165.455 - 51376.013: 99.7456% ( 6) 00:19:05.952 51376.013 - 51586.570: 99.8038% ( 8) 00:19:05.952 51586.570 - 51797.128: 99.8474% ( 6) 00:19:05.952 51797.128 - 52007.685: 99.9055% ( 8) 00:19:05.952 52007.685 - 52218.243: 99.9564% ( 7) 00:19:05.952 52218.243 - 
52428.800: 100.0000% ( 6) 00:19:05.952 00:19:05.952 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:19:05.952 ============================================================================== 00:19:05.952 Range in us Cumulative IO count 00:19:05.952 7843.264 - 7895.904: 0.0218% ( 3) 00:19:05.952 7895.904 - 7948.543: 0.1163% ( 13) 00:19:05.952 7948.543 - 8001.182: 0.3343% ( 30) 00:19:05.952 8001.182 - 8053.822: 0.7413% ( 56) 00:19:05.952 8053.822 - 8106.461: 1.3517% ( 84) 00:19:05.952 8106.461 - 8159.100: 2.3038% ( 131) 00:19:05.952 8159.100 - 8211.740: 3.6410% ( 184) 00:19:05.952 8211.740 - 8264.379: 5.4360% ( 247) 00:19:05.952 8264.379 - 8317.018: 7.8270% ( 329) 00:19:05.952 8317.018 - 8369.658: 10.6323% ( 386) 00:19:05.952 8369.658 - 8422.297: 13.8808% ( 447) 00:19:05.952 8422.297 - 8474.937: 17.5727% ( 508) 00:19:05.952 8474.937 - 8527.576: 21.7878% ( 580) 00:19:05.952 8527.576 - 8580.215: 26.2282% ( 611) 00:19:05.952 8580.215 - 8632.855: 31.1410% ( 676) 00:19:05.952 8632.855 - 8685.494: 36.2137% ( 698) 00:19:05.952 8685.494 - 8738.133: 41.5770% ( 738) 00:19:05.952 8738.133 - 8790.773: 46.9404% ( 738) 00:19:05.952 8790.773 - 8843.412: 52.0858% ( 708) 00:19:05.952 8843.412 - 8896.051: 57.3110% ( 719) 00:19:05.952 8896.051 - 8948.691: 62.2238% ( 676) 00:19:05.952 8948.691 - 9001.330: 66.8968% ( 643) 00:19:05.952 9001.330 - 9053.969: 71.0538% ( 572) 00:19:05.952 9053.969 - 9106.609: 74.9419% ( 535) 00:19:05.952 9106.609 - 9159.248: 78.2922% ( 461) 00:19:05.952 9159.248 - 9211.888: 81.3009% ( 414) 00:19:05.952 9211.888 - 9264.527: 83.9680% ( 367) 00:19:05.952 9264.527 - 9317.166: 86.1265% ( 297) 00:19:05.952 9317.166 - 9369.806: 87.8270% ( 234) 00:19:05.952 9369.806 - 9422.445: 89.1352% ( 180) 00:19:05.952 9422.445 - 9475.084: 90.1526% ( 140) 00:19:05.952 9475.084 - 9527.724: 90.8503% ( 96) 00:19:05.952 9527.724 - 9580.363: 91.3808% ( 73) 00:19:05.952 9580.363 - 9633.002: 91.7442% ( 50) 00:19:05.952 9633.002 - 9685.642: 92.1294% ( 53) 00:19:05.952 9685.642 - 9738.281: 92.4346% ( 42) 00:19:05.952 9738.281 - 9790.920: 92.7326% ( 41) 00:19:05.952 9790.920 - 9843.560: 93.0233% ( 40) 00:19:05.952 9843.560 - 9896.199: 93.2849% ( 36) 00:19:05.952 9896.199 - 9948.839: 93.5247% ( 33) 00:19:05.952 9948.839 - 10001.478: 93.7355% ( 29) 00:19:05.952 10001.478 - 10054.117: 93.9390% ( 28) 00:19:05.952 10054.117 - 10106.757: 94.1352% ( 27) 00:19:05.952 10106.757 - 10159.396: 94.3387% ( 28) 00:19:05.952 10159.396 - 10212.035: 94.5349% ( 27) 00:19:05.952 10212.035 - 10264.675: 94.7093% ( 24) 00:19:05.952 10264.675 - 10317.314: 94.9201% ( 29) 00:19:05.952 10317.314 - 10369.953: 95.1163% ( 27) 00:19:05.952 10369.953 - 10422.593: 95.2834% ( 23) 00:19:05.952 10422.593 - 10475.232: 95.4506% ( 23) 00:19:05.952 10475.232 - 10527.871: 95.5887% ( 19) 00:19:05.952 10527.871 - 10580.511: 95.7049% ( 16) 00:19:05.952 10580.511 - 10633.150: 95.8140% ( 15) 00:19:05.952 10633.150 - 10685.790: 95.9157% ( 14) 00:19:05.952 10685.790 - 10738.429: 96.0247% ( 15) 00:19:05.952 10738.429 - 10791.068: 96.1337% ( 15) 00:19:05.952 10791.068 - 10843.708: 96.2282% ( 13) 00:19:05.952 10843.708 - 10896.347: 96.3372% ( 15) 00:19:05.952 10896.347 - 10948.986: 96.4680% ( 18) 00:19:05.952 10948.986 - 11001.626: 96.5843% ( 16) 00:19:05.952 11001.626 - 11054.265: 96.7078% ( 17) 00:19:05.952 11054.265 - 11106.904: 96.8096% ( 14) 00:19:05.952 11106.904 - 11159.544: 96.9186% ( 15) 00:19:05.952 11159.544 - 11212.183: 97.0203% ( 14) 00:19:05.952 11212.183 - 11264.822: 97.1148% ( 13) 00:19:05.952 11264.822 - 11317.462: 97.1875% ( 10) 
00:19:05.952 11317.462 - 11370.101: 97.2456% ( 8) 00:19:05.952 11370.101 - 11422.741: 97.3110% ( 9) 00:19:05.952 11422.741 - 11475.380: 97.3837% ( 10) 00:19:05.952 11475.380 - 11528.019: 97.4491% ( 9) 00:19:05.952 11528.019 - 11580.659: 97.5145% ( 9) 00:19:05.952 11580.659 - 11633.298: 97.5727% ( 8) 00:19:05.952 11633.298 - 11685.937: 97.6453% ( 10) 00:19:05.952 11685.937 - 11738.577: 97.7180% ( 10) 00:19:05.952 11738.577 - 11791.216: 97.7544% ( 5) 00:19:05.952 11791.216 - 11843.855: 97.7907% ( 5) 00:19:05.952 11843.855 - 11896.495: 97.8125% ( 3) 00:19:05.952 11896.495 - 11949.134: 97.8270% ( 2) 00:19:05.952 11949.134 - 12001.773: 97.8416% ( 2) 00:19:05.952 12001.773 - 12054.413: 97.8488% ( 1) 00:19:05.952 12054.413 - 12107.052: 97.8634% ( 2) 00:19:05.952 12107.052 - 12159.692: 97.8779% ( 2) 00:19:05.952 12159.692 - 12212.331: 97.8924% ( 2) 00:19:05.952 12212.331 - 12264.970: 97.8997% ( 1) 00:19:05.952 12264.970 - 12317.610: 97.9070% ( 1) 00:19:05.952 12317.610 - 12370.249: 97.9215% ( 2) 00:19:05.952 12370.249 - 12422.888: 97.9360% ( 2) 00:19:05.952 12422.888 - 12475.528: 97.9433% ( 1) 00:19:05.952 12475.528 - 12528.167: 97.9506% ( 1) 00:19:05.952 12528.167 - 12580.806: 97.9651% ( 2) 00:19:05.952 12580.806 - 12633.446: 97.9724% ( 1) 00:19:05.952 12633.446 - 12686.085: 97.9869% ( 2) 00:19:05.952 12686.085 - 12738.724: 97.9942% ( 1) 00:19:05.952 12738.724 - 12791.364: 98.0087% ( 2) 00:19:05.952 12791.364 - 12844.003: 98.0160% ( 1) 00:19:05.952 12844.003 - 12896.643: 98.0451% ( 4) 00:19:05.952 12896.643 - 12949.282: 98.0741% ( 4) 00:19:05.952 12949.282 - 13001.921: 98.0959% ( 3) 00:19:05.953 13001.921 - 13054.561: 98.1250% ( 4) 00:19:05.953 13054.561 - 13107.200: 98.1541% ( 4) 00:19:05.953 13107.200 - 13159.839: 98.1759% ( 3) 00:19:05.953 13159.839 - 13212.479: 98.2049% ( 4) 00:19:05.953 13212.479 - 13265.118: 98.2413% ( 5) 00:19:05.953 13265.118 - 13317.757: 98.2631% ( 3) 00:19:05.953 13317.757 - 13370.397: 98.2922% ( 4) 00:19:05.953 13370.397 - 13423.036: 98.2994% ( 1) 00:19:05.953 13423.036 - 13475.676: 98.3067% ( 1) 00:19:05.953 13475.676 - 13580.954: 98.3285% ( 3) 00:19:05.953 13580.954 - 13686.233: 98.3576% ( 4) 00:19:05.953 13686.233 - 13791.512: 98.3866% ( 4) 00:19:05.953 13791.512 - 13896.790: 98.4157% ( 4) 00:19:05.953 13896.790 - 14002.069: 98.4448% ( 4) 00:19:05.953 14002.069 - 14107.348: 98.4738% ( 4) 00:19:05.953 14107.348 - 14212.627: 98.5320% ( 8) 00:19:05.953 14212.627 - 14317.905: 98.5828% ( 7) 00:19:05.953 14317.905 - 14423.184: 98.6555% ( 10) 00:19:05.953 14423.184 - 14528.463: 98.7137% ( 8) 00:19:05.953 14528.463 - 14633.741: 98.7355% ( 3) 00:19:05.953 14633.741 - 14739.020: 98.7573% ( 3) 00:19:05.953 14739.020 - 14844.299: 98.7863% ( 4) 00:19:05.953 14844.299 - 14949.578: 98.8081% ( 3) 00:19:05.953 14949.578 - 15054.856: 98.8299% ( 3) 00:19:05.953 15054.856 - 15160.135: 98.8590% ( 4) 00:19:05.953 15160.135 - 15265.414: 98.8735% ( 2) 00:19:05.953 15265.414 - 15370.692: 98.9026% ( 4) 00:19:05.953 15370.692 - 15475.971: 98.9244% ( 3) 00:19:05.953 15475.971 - 15581.250: 98.9535% ( 4) 00:19:05.953 15581.250 - 15686.529: 98.9826% ( 4) 00:19:05.953 15686.529 - 15791.807: 99.0044% ( 3) 00:19:05.953 15791.807 - 15897.086: 99.0334% ( 4) 00:19:05.953 15897.086 - 16002.365: 99.0552% ( 3) 00:19:05.953 16002.365 - 16107.643: 99.0698% ( 2) 00:19:05.953 40637.584 - 40848.141: 99.0988% ( 4) 00:19:05.953 40848.141 - 41058.699: 99.1424% ( 6) 00:19:05.953 41058.699 - 41269.256: 99.1933% ( 7) 00:19:05.953 41269.256 - 41479.814: 99.2515% ( 8) 00:19:05.953 41479.814 - 41690.371: 99.2951% ( 6) 
00:19:05.953 41690.371 - 41900.929: 99.3532% ( 8) 00:19:05.953 41900.929 - 42111.486: 99.4041% ( 7) 00:19:05.953 42111.486 - 42322.043: 99.4549% ( 7) 00:19:05.953 42322.043 - 42532.601: 99.5058% ( 7) 00:19:05.953 42532.601 - 42743.158: 99.5349% ( 4) 00:19:05.953 48217.651 - 48428.209: 99.5640% ( 4) 00:19:05.953 48428.209 - 48638.766: 99.6148% ( 7) 00:19:05.953 48638.766 - 48849.324: 99.6657% ( 7) 00:19:05.953 48849.324 - 49059.881: 99.7093% ( 6) 00:19:05.953 49059.881 - 49270.439: 99.7529% ( 6) 00:19:05.953 49270.439 - 49480.996: 99.8038% ( 7) 00:19:05.953 49480.996 - 49691.553: 99.8547% ( 7) 00:19:05.953 49691.553 - 49902.111: 99.8983% ( 6) 00:19:05.953 49902.111 - 50112.668: 99.9564% ( 8) 00:19:05.953 50112.668 - 50323.226: 100.0000% ( 6) 00:19:05.953 00:19:05.953 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:19:05.953 ============================================================================== 00:19:05.953 Range in us Cumulative IO count 00:19:05.953 7895.904 - 7948.543: 0.0945% ( 13) 00:19:05.953 7948.543 - 8001.182: 0.2616% ( 23) 00:19:05.953 8001.182 - 8053.822: 0.6977% ( 60) 00:19:05.953 8053.822 - 8106.461: 1.3808% ( 94) 00:19:05.953 8106.461 - 8159.100: 2.3183% ( 129) 00:19:05.953 8159.100 - 8211.740: 3.5974% ( 176) 00:19:05.953 8211.740 - 8264.379: 5.5087% ( 263) 00:19:05.953 8264.379 - 8317.018: 7.8706% ( 325) 00:19:05.953 8317.018 - 8369.658: 10.6250% ( 379) 00:19:05.953 8369.658 - 8422.297: 14.0044% ( 465) 00:19:05.953 8422.297 - 8474.937: 17.7035% ( 509) 00:19:05.953 8474.937 - 8527.576: 21.7224% ( 553) 00:19:05.953 8527.576 - 8580.215: 26.3663% ( 639) 00:19:05.953 8580.215 - 8632.855: 31.2137% ( 667) 00:19:05.953 8632.855 - 8685.494: 36.3517% ( 707) 00:19:05.953 8685.494 - 8738.133: 41.6206% ( 725) 00:19:05.953 8738.133 - 8790.773: 47.0712% ( 750) 00:19:05.953 8790.773 - 8843.412: 52.3038% ( 720) 00:19:05.953 8843.412 - 8896.051: 57.4564% ( 709) 00:19:05.953 8896.051 - 8948.691: 62.2456% ( 659) 00:19:05.953 8948.691 - 9001.330: 66.8895% ( 639) 00:19:05.953 9001.330 - 9053.969: 71.1773% ( 590) 00:19:05.953 9053.969 - 9106.609: 74.9782% ( 523) 00:19:05.953 9106.609 - 9159.248: 78.3721% ( 467) 00:19:05.953 9159.248 - 9211.888: 81.3227% ( 406) 00:19:05.953 9211.888 - 9264.527: 83.8517% ( 348) 00:19:05.953 9264.527 - 9317.166: 85.8285% ( 272) 00:19:05.953 9317.166 - 9369.806: 87.5945% ( 243) 00:19:05.953 9369.806 - 9422.445: 88.9172% ( 182) 00:19:05.953 9422.445 - 9475.084: 89.9346% ( 140) 00:19:05.953 9475.084 - 9527.724: 90.7267% ( 109) 00:19:05.953 9527.724 - 9580.363: 91.1773% ( 62) 00:19:05.953 9580.363 - 9633.002: 91.5480% ( 51) 00:19:05.953 9633.002 - 9685.642: 91.8895% ( 47) 00:19:05.953 9685.642 - 9738.281: 92.2311% ( 47) 00:19:05.953 9738.281 - 9790.920: 92.5581% ( 45) 00:19:05.953 9790.920 - 9843.560: 92.8198% ( 36) 00:19:05.953 9843.560 - 9896.199: 93.0814% ( 36) 00:19:05.953 9896.199 - 9948.839: 93.2776% ( 27) 00:19:05.953 9948.839 - 10001.478: 93.4666% ( 26) 00:19:05.953 10001.478 - 10054.117: 93.6701% ( 28) 00:19:05.953 10054.117 - 10106.757: 93.8445% ( 24) 00:19:05.953 10106.757 - 10159.396: 94.0916% ( 34) 00:19:05.953 10159.396 - 10212.035: 94.3023% ( 29) 00:19:05.953 10212.035 - 10264.675: 94.5203% ( 30) 00:19:05.953 10264.675 - 10317.314: 94.7238% ( 28) 00:19:05.953 10317.314 - 10369.953: 94.9491% ( 31) 00:19:05.953 10369.953 - 10422.593: 95.1526% ( 28) 00:19:05.953 10422.593 - 10475.232: 95.3416% ( 26) 00:19:05.953 10475.232 - 10527.871: 95.5015% ( 22) 00:19:05.953 10527.871 - 10580.511: 95.6323% ( 18) 00:19:05.953 10580.511 - 10633.150: 
95.7703% ( 19) 00:19:05.953 10633.150 - 10685.790: 95.8794% ( 15) 00:19:05.953 10685.790 - 10738.429: 95.9811% ( 14) 00:19:05.953 10738.429 - 10791.068: 96.0901% ( 15) 00:19:05.953 10791.068 - 10843.708: 96.1773% ( 12) 00:19:05.953 10843.708 - 10896.347: 96.2718% ( 13) 00:19:05.953 10896.347 - 10948.986: 96.4390% ( 23) 00:19:05.953 10948.986 - 11001.626: 96.5916% ( 21) 00:19:05.953 11001.626 - 11054.265: 96.7006% ( 15) 00:19:05.953 11054.265 - 11106.904: 96.8023% ( 14) 00:19:05.953 11106.904 - 11159.544: 96.9186% ( 16) 00:19:05.953 11159.544 - 11212.183: 97.0131% ( 13) 00:19:05.953 11212.183 - 11264.822: 97.1148% ( 14) 00:19:05.953 11264.822 - 11317.462: 97.2238% ( 15) 00:19:05.953 11317.462 - 11370.101: 97.3110% ( 12) 00:19:05.953 11370.101 - 11422.741: 97.3910% ( 11) 00:19:05.953 11422.741 - 11475.380: 97.4782% ( 12) 00:19:05.953 11475.380 - 11528.019: 97.5654% ( 12) 00:19:05.953 11528.019 - 11580.659: 97.6381% ( 10) 00:19:05.953 11580.659 - 11633.298: 97.7035% ( 9) 00:19:05.953 11633.298 - 11685.937: 97.7689% ( 9) 00:19:05.953 11685.937 - 11738.577: 97.8343% ( 9) 00:19:05.953 11738.577 - 11791.216: 97.8634% ( 4) 00:19:05.953 11791.216 - 11843.855: 97.8997% ( 5) 00:19:05.953 11843.855 - 11896.495: 97.9142% ( 2) 00:19:05.953 11896.495 - 11949.134: 97.9288% ( 2) 00:19:05.953 11949.134 - 12001.773: 97.9433% ( 2) 00:19:05.953 12001.773 - 12054.413: 97.9506% ( 1) 00:19:05.953 12054.413 - 12107.052: 97.9651% ( 2) 00:19:05.953 12107.052 - 12159.692: 97.9797% ( 2) 00:19:05.953 12159.692 - 12212.331: 97.9942% ( 2) 00:19:05.953 12212.331 - 12264.970: 98.0015% ( 1) 00:19:05.953 12264.970 - 12317.610: 98.0160% ( 2) 00:19:05.953 12317.610 - 12370.249: 98.0233% ( 1) 00:19:05.953 12370.249 - 12422.888: 98.0378% ( 2) 00:19:05.953 12422.888 - 12475.528: 98.0523% ( 2) 00:19:05.953 12475.528 - 12528.167: 98.0669% ( 2) 00:19:05.953 12528.167 - 12580.806: 98.0814% ( 2) 00:19:05.953 12580.806 - 12633.446: 98.0887% ( 1) 00:19:05.953 12633.446 - 12686.085: 98.1032% ( 2) 00:19:05.953 12686.085 - 12738.724: 98.1177% ( 2) 00:19:05.953 12738.724 - 12791.364: 98.1323% ( 2) 00:19:05.953 12791.364 - 12844.003: 98.1395% ( 1) 00:19:05.953 13475.676 - 13580.954: 98.1541% ( 2) 00:19:05.953 13580.954 - 13686.233: 98.1977% ( 6) 00:19:05.953 13686.233 - 13791.512: 98.2703% ( 10) 00:19:05.953 13791.512 - 13896.790: 98.3140% ( 6) 00:19:05.953 13896.790 - 14002.069: 98.3576% ( 6) 00:19:05.953 14002.069 - 14107.348: 98.4084% ( 7) 00:19:05.953 14107.348 - 14212.627: 98.4738% ( 9) 00:19:05.953 14212.627 - 14317.905: 98.5320% ( 8) 00:19:05.953 14317.905 - 14423.184: 98.5828% ( 7) 00:19:05.953 14423.184 - 14528.463: 98.6410% ( 8) 00:19:05.953 14528.463 - 14633.741: 98.7064% ( 9) 00:19:05.953 14633.741 - 14739.020: 98.7573% ( 7) 00:19:05.953 14739.020 - 14844.299: 98.8154% ( 8) 00:19:05.953 14844.299 - 14949.578: 98.8663% ( 7) 00:19:05.953 14949.578 - 15054.856: 98.9244% ( 8) 00:19:05.953 15054.856 - 15160.135: 98.9608% ( 5) 00:19:05.953 15160.135 - 15265.414: 98.9826% ( 3) 00:19:05.953 15265.414 - 15370.692: 99.0116% ( 4) 00:19:05.953 15370.692 - 15475.971: 99.0334% ( 3) 00:19:05.953 15475.971 - 15581.250: 99.0625% ( 4) 00:19:05.953 15581.250 - 15686.529: 99.0698% ( 1) 00:19:05.953 38532.010 - 38742.567: 99.0843% ( 2) 00:19:05.953 38742.567 - 38953.124: 99.1279% ( 6) 00:19:05.953 38953.124 - 39163.682: 99.1860% ( 8) 00:19:05.953 39163.682 - 39374.239: 99.2442% ( 8) 00:19:05.953 39374.239 - 39584.797: 99.2878% ( 6) 00:19:05.953 39584.797 - 39795.354: 99.3459% ( 8) 00:19:05.953 39795.354 - 40005.912: 99.3968% ( 7) 00:19:05.953 
40005.912 - 40216.469: 99.4404% ( 6) 00:19:05.953 40216.469 - 40427.027: 99.4913% ( 7) 00:19:05.953 40427.027 - 40637.584: 99.5349% ( 6) 00:19:05.953 45901.520 - 46112.077: 99.5422% ( 1) 00:19:05.953 46112.077 - 46322.635: 99.5858% ( 6) 00:19:05.953 46322.635 - 46533.192: 99.6366% ( 7) 00:19:05.953 46533.192 - 46743.749: 99.6802% ( 6) 00:19:05.953 46743.749 - 46954.307: 99.7311% ( 7) 00:19:05.953 46954.307 - 47164.864: 99.7820% ( 7) 00:19:05.953 47164.864 - 47375.422: 99.8328% ( 7) 00:19:05.953 47375.422 - 47585.979: 99.8837% ( 7) 00:19:05.953 47585.979 - 47796.537: 99.9346% ( 7) 00:19:05.953 47796.537 - 48007.094: 99.9855% ( 7) 00:19:05.953 48007.094 - 48217.651: 100.0000% ( 2) 00:19:05.953 00:19:05.953 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:19:05.953 ============================================================================== 00:19:05.953 Range in us Cumulative IO count 00:19:05.953 7843.264 - 7895.904: 0.0145% ( 2) 00:19:05.953 7895.904 - 7948.543: 0.0940% ( 11) 00:19:05.953 7948.543 - 8001.182: 0.2098% ( 16) 00:19:05.953 8001.182 - 8053.822: 0.6944% ( 67) 00:19:05.953 8053.822 - 8106.461: 1.3455% ( 90) 00:19:05.954 8106.461 - 8159.100: 2.3365% ( 137) 00:19:05.954 8159.100 - 8211.740: 3.7833% ( 200) 00:19:05.954 8211.740 - 8264.379: 5.5628% ( 246) 00:19:05.954 8264.379 - 8317.018: 7.8848% ( 321) 00:19:05.954 8317.018 - 8369.658: 10.6120% ( 377) 00:19:05.954 8369.658 - 8422.297: 13.8744% ( 451) 00:19:05.954 8422.297 - 8474.937: 17.5781% ( 512) 00:19:05.954 8474.937 - 8527.576: 21.6435% ( 562) 00:19:05.954 8527.576 - 8580.215: 26.2659% ( 639) 00:19:05.954 8580.215 - 8632.855: 31.2066% ( 683) 00:19:05.954 8632.855 - 8685.494: 36.4005% ( 718) 00:19:05.954 8685.494 - 8738.133: 41.7390% ( 738) 00:19:05.954 8738.133 - 8790.773: 47.0703% ( 737) 00:19:05.954 8790.773 - 8843.412: 52.2135% ( 711) 00:19:05.954 8843.412 - 8896.051: 57.2989% ( 703) 00:19:05.954 8896.051 - 8948.691: 62.0009% ( 650) 00:19:05.954 8948.691 - 9001.330: 66.5799% ( 633) 00:19:05.954 9001.330 - 9053.969: 70.7031% ( 570) 00:19:05.954 9053.969 - 9106.609: 74.4647% ( 520) 00:19:05.954 9106.609 - 9159.248: 77.9297% ( 479) 00:19:05.954 9159.248 - 9211.888: 80.8666% ( 406) 00:19:05.954 9211.888 - 9264.527: 83.3623% ( 345) 00:19:05.954 9264.527 - 9317.166: 85.4673% ( 291) 00:19:05.954 9317.166 - 9369.806: 87.1817% ( 237) 00:19:05.954 9369.806 - 9422.445: 88.5055% ( 183) 00:19:05.954 9422.445 - 9475.084: 89.4821% ( 135) 00:19:05.954 9475.084 - 9527.724: 90.2344% ( 104) 00:19:05.954 9527.724 - 9580.363: 90.7190% ( 67) 00:19:05.954 9580.363 - 9633.002: 91.1458% ( 59) 00:19:05.954 9633.002 - 9685.642: 91.4931% ( 48) 00:19:05.954 9685.642 - 9738.281: 91.8113% ( 44) 00:19:05.954 9738.281 - 9790.920: 92.1586% ( 48) 00:19:05.954 9790.920 - 9843.560: 92.4334% ( 38) 00:19:05.954 9843.560 - 9896.199: 92.6505% ( 30) 00:19:05.954 9896.199 - 9948.839: 92.8530% ( 28) 00:19:05.954 9948.839 - 10001.478: 93.0411% ( 26) 00:19:05.954 10001.478 - 10054.117: 93.2436% ( 28) 00:19:05.954 10054.117 - 10106.757: 93.4389% ( 27) 00:19:05.954 10106.757 - 10159.396: 93.6198% ( 25) 00:19:05.954 10159.396 - 10212.035: 93.8223% ( 28) 00:19:05.954 10212.035 - 10264.675: 94.0249% ( 28) 00:19:05.954 10264.675 - 10317.314: 94.2130% ( 26) 00:19:05.954 10317.314 - 10369.953: 94.4227% ( 29) 00:19:05.954 10369.953 - 10422.593: 94.6470% ( 31) 00:19:05.954 10422.593 - 10475.232: 94.8206% ( 24) 00:19:05.954 10475.232 - 10527.871: 95.0087% ( 26) 00:19:05.954 10527.871 - 10580.511: 95.1461% ( 19) 00:19:05.954 10580.511 - 10633.150: 95.2836% ( 19) 
00:19:05.954 10633.150 - 10685.790: 95.3704% ( 12) 00:19:05.954 10685.790 - 10738.429: 95.4644% ( 13) 00:19:05.954 10738.429 - 10791.068: 95.5584% ( 13) 00:19:05.954 10791.068 - 10843.708: 95.6380% ( 11) 00:19:05.954 10843.708 - 10896.347: 95.7321% ( 13) 00:19:05.954 10896.347 - 10948.986: 95.8333% ( 14) 00:19:05.954 10948.986 - 11001.626: 95.9635% ( 18) 00:19:05.954 11001.626 - 11054.265: 96.0648% ( 14) 00:19:05.954 11054.265 - 11106.904: 96.1806% ( 16) 00:19:05.954 11106.904 - 11159.544: 96.2674% ( 12) 00:19:05.954 11159.544 - 11212.183: 96.3759% ( 15) 00:19:05.954 11212.183 - 11264.822: 96.4699% ( 13) 00:19:05.954 11264.822 - 11317.462: 96.5784% ( 15) 00:19:05.954 11317.462 - 11370.101: 96.6797% ( 14) 00:19:05.954 11370.101 - 11422.741: 96.7810% ( 14) 00:19:05.954 11422.741 - 11475.380: 96.8822% ( 14) 00:19:05.954 11475.380 - 11528.019: 96.9835% ( 14) 00:19:05.954 11528.019 - 11580.659: 97.0848% ( 14) 00:19:05.954 11580.659 - 11633.298: 97.1933% ( 15) 00:19:05.954 11633.298 - 11685.937: 97.2873% ( 13) 00:19:05.954 11685.937 - 11738.577: 97.3741% ( 12) 00:19:05.954 11738.577 - 11791.216: 97.4465% ( 10) 00:19:05.954 11791.216 - 11843.855: 97.5043% ( 8) 00:19:05.954 11843.855 - 11896.495: 97.5622% ( 8) 00:19:05.954 11896.495 - 11949.134: 97.6056% ( 6) 00:19:05.954 11949.134 - 12001.773: 97.6562% ( 7) 00:19:05.954 12001.773 - 12054.413: 97.6997% ( 6) 00:19:05.954 12054.413 - 12107.052: 97.7286% ( 4) 00:19:05.954 12107.052 - 12159.692: 97.7648% ( 5) 00:19:05.954 12159.692 - 12212.331: 97.8082% ( 6) 00:19:05.954 12212.331 - 12264.970: 97.8516% ( 6) 00:19:05.954 12264.970 - 12317.610: 97.8877% ( 5) 00:19:05.954 12317.610 - 12370.249: 97.9311% ( 6) 00:19:05.954 12370.249 - 12422.888: 97.9601% ( 4) 00:19:05.954 12422.888 - 12475.528: 97.9890% ( 4) 00:19:05.954 12475.528 - 12528.167: 98.0107% ( 3) 00:19:05.954 12528.167 - 12580.806: 98.0396% ( 4) 00:19:05.954 12580.806 - 12633.446: 98.0686% ( 4) 00:19:05.954 12633.446 - 12686.085: 98.0975% ( 4) 00:19:05.954 12686.085 - 12738.724: 98.1264% ( 4) 00:19:05.954 12738.724 - 12791.364: 98.1409% ( 2) 00:19:05.954 12791.364 - 12844.003: 98.1481% ( 1) 00:19:05.954 13265.118 - 13317.757: 98.1554% ( 1) 00:19:05.954 13317.757 - 13370.397: 98.1771% ( 3) 00:19:05.954 13370.397 - 13423.036: 98.1916% ( 2) 00:19:05.954 13423.036 - 13475.676: 98.1988% ( 1) 00:19:05.954 13475.676 - 13580.954: 98.2205% ( 3) 00:19:05.954 13580.954 - 13686.233: 98.2567% ( 5) 00:19:05.954 13686.233 - 13791.512: 98.2784% ( 3) 00:19:05.954 13791.512 - 13896.790: 98.3073% ( 4) 00:19:05.954 13896.790 - 14002.069: 98.3290% ( 3) 00:19:05.954 14002.069 - 14107.348: 98.3941% ( 9) 00:19:05.954 14107.348 - 14212.627: 98.4375% ( 6) 00:19:05.954 14212.627 - 14317.905: 98.4954% ( 8) 00:19:05.954 14317.905 - 14423.184: 98.5460% ( 7) 00:19:05.954 14423.184 - 14528.463: 98.6111% ( 9) 00:19:05.954 14528.463 - 14633.741: 98.6617% ( 7) 00:19:05.954 14633.741 - 14739.020: 98.7196% ( 8) 00:19:05.954 14739.020 - 14844.299: 98.7775% ( 8) 00:19:05.954 14844.299 - 14949.578: 98.8354% ( 8) 00:19:05.954 14949.578 - 15054.856: 98.8860% ( 7) 00:19:05.954 15054.856 - 15160.135: 98.9511% ( 9) 00:19:05.954 15160.135 - 15265.414: 98.9800% ( 4) 00:19:05.954 15265.414 - 15370.692: 99.0090% ( 4) 00:19:05.954 15370.692 - 15475.971: 99.0451% ( 5) 00:19:05.954 15475.971 - 15581.250: 99.0668% ( 3) 00:19:05.954 15581.250 - 15686.529: 99.0741% ( 1) 00:19:05.954 29899.155 - 30109.712: 99.0885% ( 2) 00:19:05.954 30109.712 - 30320.270: 99.1247% ( 5) 00:19:05.954 30320.270 - 30530.827: 99.1753% ( 7) 00:19:05.954 30530.827 - 
30741.385: 99.2332% ( 8) 00:19:05.954 30741.385 - 30951.942: 99.2839% ( 7) 00:19:05.954 30951.942 - 31162.500: 99.3345% ( 7) 00:19:05.954 31162.500 - 31373.057: 99.3851% ( 7) 00:19:05.954 31373.057 - 31583.614: 99.4358% ( 7) 00:19:05.954 31583.614 - 31794.172: 99.4864% ( 7) 00:19:05.954 31794.172 - 32004.729: 99.5370% ( 7) 00:19:05.954 38110.895 - 38321.452: 99.5804% ( 6) 00:19:05.954 38321.452 - 38532.010: 99.6383% ( 8) 00:19:05.954 38532.010 - 38742.567: 99.6889% ( 7) 00:19:05.954 38742.567 - 38953.124: 99.7396% ( 7) 00:19:05.954 38953.124 - 39163.682: 99.7902% ( 7) 00:19:05.954 39163.682 - 39374.239: 99.8409% ( 7) 00:19:05.954 39374.239 - 39584.797: 99.8987% ( 8) 00:19:05.954 39584.797 - 39795.354: 99.9494% ( 7) 00:19:05.954 39795.354 - 40005.912: 99.9928% ( 6) 00:19:05.954 40005.912 - 40216.469: 100.0000% ( 1) 00:19:05.954 00:19:05.954 12:21:15 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:19:07.335 Initializing NVMe Controllers 00:19:07.335 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:19:07.335 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:19:07.335 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:19:07.335 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:19:07.335 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:19:07.335 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:19:07.335 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:19:07.335 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:19:07.335 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:19:07.335 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:19:07.335 Initialization complete. Launching workers. 00:19:07.335 ======================================================== 00:19:07.335 Latency(us) 00:19:07.335 Device Information : IOPS MiB/s Average min max 00:19:07.335 PCIE (0000:00:10.0) NSID 1 from core 0: 10363.04 121.44 12376.65 7681.53 44204.33 00:19:07.335 PCIE (0000:00:11.0) NSID 1 from core 0: 10363.04 121.44 12355.16 7505.55 42640.89 00:19:07.335 PCIE (0000:00:13.0) NSID 1 from core 0: 10363.04 121.44 12334.04 7606.60 41790.03 00:19:07.335 PCIE (0000:00:12.0) NSID 1 from core 0: 10363.04 121.44 12311.55 7679.54 39930.09 00:19:07.335 PCIE (0000:00:12.0) NSID 2 from core 0: 10363.04 121.44 12289.25 7666.47 38010.21 00:19:07.335 PCIE (0000:00:12.0) NSID 3 from core 0: 10427.01 122.19 12192.09 7724.69 29385.12 00:19:07.335 ======================================================== 00:19:07.335 Total : 62242.19 729.40 12309.67 7505.55 44204.33 00:19:07.335 00:19:07.335 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:19:07.335 ================================================================================= 00:19:07.335 1.00000% : 8106.461us 00:19:07.335 10.00000% : 9053.969us 00:19:07.335 25.00000% : 9896.199us 00:19:07.335 50.00000% : 11370.101us 00:19:07.335 75.00000% : 14317.905us 00:19:07.335 90.00000% : 16739.316us 00:19:07.335 95.00000% : 17581.545us 00:19:07.335 98.00000% : 19581.841us 00:19:07.335 99.00000% : 34531.418us 00:19:07.335 99.50000% : 42743.158us 00:19:07.335 99.90000% : 44006.503us 00:19:07.335 99.99000% : 44217.060us 00:19:07.335 99.99900% : 44217.060us 00:19:07.335 99.99990% : 44217.060us 00:19:07.335 99.99999% : 44217.060us 00:19:07.335 00:19:07.335 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:19:07.335 ================================================================================= 00:19:07.335 
1.00000% : 7948.543us 00:19:07.335 10.00000% : 9106.609us 00:19:07.335 25.00000% : 9843.560us 00:19:07.335 50.00000% : 11317.462us 00:19:07.335 75.00000% : 14528.463us 00:19:07.335 90.00000% : 16739.316us 00:19:07.335 95.00000% : 17897.382us 00:19:07.335 98.00000% : 19371.284us 00:19:07.335 99.00000% : 32636.402us 00:19:07.335 99.50000% : 41269.256us 00:19:07.335 99.90000% : 42532.601us 00:19:07.335 99.99000% : 42743.158us 00:19:07.335 99.99900% : 42743.158us 00:19:07.335 99.99990% : 42743.158us 00:19:07.335 99.99999% : 42743.158us 00:19:07.335 00:19:07.335 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:19:07.335 ================================================================================= 00:19:07.335 1.00000% : 8211.740us 00:19:07.335 10.00000% : 9001.330us 00:19:07.335 25.00000% : 9948.839us 00:19:07.335 50.00000% : 11370.101us 00:19:07.335 75.00000% : 14212.627us 00:19:07.335 90.00000% : 16739.316us 00:19:07.335 95.00000% : 17792.103us 00:19:07.335 98.00000% : 19055.447us 00:19:07.335 99.00000% : 32215.287us 00:19:07.335 99.50000% : 40427.027us 00:19:07.335 99.90000% : 41690.371us 00:19:07.335 99.99000% : 41900.929us 00:19:07.335 99.99900% : 41900.929us 00:19:07.335 99.99990% : 41900.929us 00:19:07.335 99.99999% : 41900.929us 00:19:07.335 00:19:07.335 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:19:07.335 ================================================================================= 00:19:07.335 1.00000% : 8106.461us 00:19:07.335 10.00000% : 9001.330us 00:19:07.335 25.00000% : 9896.199us 00:19:07.335 50.00000% : 11317.462us 00:19:07.335 75.00000% : 14002.069us 00:19:07.335 90.00000% : 16634.037us 00:19:07.335 95.00000% : 17686.824us 00:19:07.335 98.00000% : 19160.726us 00:19:07.335 99.00000% : 30530.827us 00:19:07.335 99.50000% : 38532.010us 00:19:07.335 99.90000% : 39795.354us 00:19:07.335 99.99000% : 40005.912us 00:19:07.335 99.99900% : 40005.912us 00:19:07.335 99.99990% : 40005.912us 00:19:07.335 99.99999% : 40005.912us 00:19:07.335 00:19:07.335 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:19:07.335 ================================================================================= 00:19:07.335 1.00000% : 8106.461us 00:19:07.335 10.00000% : 9053.969us 00:19:07.335 25.00000% : 9896.199us 00:19:07.335 50.00000% : 11370.101us 00:19:07.335 75.00000% : 14002.069us 00:19:07.335 90.00000% : 16528.758us 00:19:07.335 95.00000% : 17476.267us 00:19:07.335 98.00000% : 19055.447us 00:19:07.335 99.00000% : 28846.368us 00:19:07.335 99.50000% : 36636.993us 00:19:07.335 99.90000% : 37900.337us 00:19:07.335 99.99000% : 38110.895us 00:19:07.335 99.99900% : 38110.895us 00:19:07.335 99.99990% : 38110.895us 00:19:07.335 99.99999% : 38110.895us 00:19:07.335 00:19:07.335 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:19:07.335 ================================================================================= 00:19:07.335 1.00000% : 8159.100us 00:19:07.335 10.00000% : 9053.969us 00:19:07.335 25.00000% : 9896.199us 00:19:07.335 50.00000% : 11317.462us 00:19:07.335 75.00000% : 14107.348us 00:19:07.335 90.00000% : 16739.316us 00:19:07.335 95.00000% : 17476.267us 00:19:07.335 98.00000% : 19371.284us 00:19:07.335 99.00000% : 20213.513us 00:19:07.335 99.50000% : 28004.138us 00:19:07.335 99.90000% : 29267.483us 00:19:07.335 99.99000% : 29478.040us 00:19:07.335 99.99900% : 29478.040us 00:19:07.335 99.99990% : 29478.040us 00:19:07.335 99.99999% : 29478.040us 00:19:07.335 00:19:07.335 Latency histogram for PCIE (0000:00:10.0) 
NSID 1 from core 0: 00:19:07.335 ============================================================================== 00:19:07.335 Range in us Cumulative IO count 00:19:07.335 7632.707 - 7685.346: 0.0096% ( 1) 00:19:07.335 7685.346 - 7737.986: 0.0482% ( 4) 00:19:07.335 7737.986 - 7790.625: 0.0868% ( 4) 00:19:07.335 7790.625 - 7843.264: 0.2218% ( 14) 00:19:07.335 7843.264 - 7895.904: 0.4244% ( 21) 00:19:07.335 7895.904 - 7948.543: 0.5305% ( 11) 00:19:07.335 7948.543 - 8001.182: 0.7427% ( 22) 00:19:07.335 8001.182 - 8053.822: 0.9838% ( 25) 00:19:07.335 8053.822 - 8106.461: 1.1960% ( 22) 00:19:07.335 8106.461 - 8159.100: 1.6300% ( 45) 00:19:07.335 8159.100 - 8211.740: 2.1412% ( 53) 00:19:07.335 8211.740 - 8264.379: 2.4595% ( 33) 00:19:07.335 8264.379 - 8317.018: 2.8067% ( 36) 00:19:07.335 8317.018 - 8369.658: 3.1443% ( 35) 00:19:07.335 8369.658 - 8422.297: 3.6265% ( 50) 00:19:07.335 8422.297 - 8474.937: 4.1956% ( 59) 00:19:07.335 8474.937 - 8527.576: 4.8997% ( 73) 00:19:07.335 8527.576 - 8580.215: 5.4880% ( 61) 00:19:07.335 8580.215 - 8632.855: 6.0185% ( 55) 00:19:07.335 8632.855 - 8685.494: 6.4429% ( 44) 00:19:07.335 8685.494 - 8738.133: 6.8191% ( 39) 00:19:07.335 8738.133 - 8790.773: 7.3881% ( 59) 00:19:07.335 8790.773 - 8843.412: 7.7836% ( 41) 00:19:07.335 8843.412 - 8896.051: 8.2465% ( 48) 00:19:07.335 8896.051 - 8948.691: 8.8542% ( 63) 00:19:07.335 8948.691 - 9001.330: 9.4811% ( 65) 00:19:07.335 9001.330 - 9053.969: 10.1755% ( 72) 00:19:07.335 9053.969 - 9106.609: 10.9954% ( 85) 00:19:07.335 9106.609 - 9159.248: 11.9695% ( 101) 00:19:07.335 9159.248 - 9211.888: 12.5579% ( 61) 00:19:07.335 9211.888 - 9264.527: 13.2427% ( 71) 00:19:07.335 9264.527 - 9317.166: 14.0239% ( 81) 00:19:07.335 9317.166 - 9369.806: 14.8245% ( 83) 00:19:07.335 9369.806 - 9422.445: 15.8758% ( 109) 00:19:07.335 9422.445 - 9475.084: 16.6763% ( 83) 00:19:07.335 9475.084 - 9527.724: 17.7373% ( 110) 00:19:07.335 9527.724 - 9580.363: 19.1165% ( 143) 00:19:07.335 9580.363 - 9633.002: 20.5150% ( 145) 00:19:07.335 9633.002 - 9685.642: 21.7110% ( 124) 00:19:07.335 9685.642 - 9738.281: 22.6948% ( 102) 00:19:07.335 9738.281 - 9790.920: 23.6208% ( 96) 00:19:07.335 9790.920 - 9843.560: 24.7492% ( 117) 00:19:07.335 9843.560 - 9896.199: 25.7716% ( 106) 00:19:07.335 9896.199 - 9948.839: 26.8326% ( 110) 00:19:07.335 9948.839 - 10001.478: 28.1057% ( 132) 00:19:07.336 10001.478 - 10054.117: 29.4850% ( 143) 00:19:07.336 10054.117 - 10106.757: 30.7002% ( 126) 00:19:07.336 10106.757 - 10159.396: 31.6262% ( 96) 00:19:07.336 10159.396 - 10212.035: 32.6485% ( 106) 00:19:07.336 10212.035 - 10264.675: 33.9410% ( 134) 00:19:07.336 10264.675 - 10317.314: 35.2045% ( 131) 00:19:07.336 10317.314 - 10369.953: 36.5258% ( 137) 00:19:07.336 10369.953 - 10422.593: 37.7701% ( 129) 00:19:07.336 10422.593 - 10475.232: 38.7249% ( 99) 00:19:07.336 10475.232 - 10527.871: 39.6894% ( 100) 00:19:07.336 10527.871 - 10580.511: 40.5671% ( 91) 00:19:07.336 10580.511 - 10633.150: 41.4448% ( 91) 00:19:07.336 10633.150 - 10685.790: 42.2550% ( 84) 00:19:07.336 10685.790 - 10738.429: 43.1134% ( 89) 00:19:07.336 10738.429 - 10791.068: 43.9525% ( 87) 00:19:07.336 10791.068 - 10843.708: 44.7917% ( 87) 00:19:07.336 10843.708 - 10896.347: 45.5343% ( 77) 00:19:07.336 10896.347 - 10948.986: 46.2191% ( 71) 00:19:07.336 10948.986 - 11001.626: 46.6917% ( 49) 00:19:07.336 11001.626 - 11054.265: 47.1065% ( 43) 00:19:07.336 11054.265 - 11106.904: 47.6370% ( 55) 00:19:07.336 11106.904 - 11159.544: 48.3410% ( 73) 00:19:07.336 11159.544 - 11212.183: 48.7751% ( 45) 00:19:07.336 11212.183 - 
11264.822: 49.3345% ( 58) 00:19:07.336 11264.822 - 11317.462: 49.9035% ( 59) 00:19:07.336 11317.462 - 11370.101: 50.4823% ( 60) 00:19:07.336 11370.101 - 11422.741: 51.3696% ( 92) 00:19:07.336 11422.741 - 11475.380: 51.9676% ( 62) 00:19:07.336 11475.380 - 11528.019: 52.9321% ( 100) 00:19:07.336 11528.019 - 11580.659: 54.1088% ( 122) 00:19:07.336 11580.659 - 11633.298: 54.9190% ( 84) 00:19:07.336 11633.298 - 11685.937: 55.6906% ( 80) 00:19:07.336 11685.937 - 11738.577: 56.1728% ( 50) 00:19:07.336 11738.577 - 11791.216: 56.5297% ( 37) 00:19:07.336 11791.216 - 11843.855: 56.8769% ( 36) 00:19:07.336 11843.855 - 11896.495: 57.1856% ( 32) 00:19:07.336 11896.495 - 11949.134: 57.5521% ( 38) 00:19:07.336 11949.134 - 12001.773: 58.1211% ( 59) 00:19:07.336 12001.773 - 12054.413: 58.5166% ( 41) 00:19:07.336 12054.413 - 12107.052: 58.9313% ( 43) 00:19:07.336 12107.052 - 12159.692: 59.5004% ( 59) 00:19:07.336 12159.692 - 12212.331: 60.0309% ( 55) 00:19:07.336 12212.331 - 12264.970: 60.5903% ( 58) 00:19:07.336 12264.970 - 12317.610: 61.1786% ( 61) 00:19:07.336 12317.610 - 12370.249: 61.8345% ( 68) 00:19:07.336 12370.249 - 12422.888: 62.2878% ( 47) 00:19:07.336 12422.888 - 12475.528: 62.8279% ( 56) 00:19:07.336 12475.528 - 12528.167: 63.2716% ( 46) 00:19:07.336 12528.167 - 12580.806: 63.9853% ( 74) 00:19:07.336 12580.806 - 12633.446: 64.3808% ( 41) 00:19:07.336 12633.446 - 12686.085: 64.8148% ( 45) 00:19:07.336 12686.085 - 12738.724: 65.2971% ( 50) 00:19:07.336 12738.724 - 12791.364: 65.6443% ( 36) 00:19:07.336 12791.364 - 12844.003: 66.1458% ( 52) 00:19:07.336 12844.003 - 12896.643: 66.5027% ( 37) 00:19:07.336 12896.643 - 12949.282: 66.8017% ( 31) 00:19:07.336 12949.282 - 13001.921: 67.1393% ( 35) 00:19:07.336 13001.921 - 13054.561: 67.4769% ( 35) 00:19:07.336 13054.561 - 13107.200: 67.7469% ( 28) 00:19:07.336 13107.200 - 13159.839: 68.1424% ( 41) 00:19:07.336 13159.839 - 13212.479: 68.5667% ( 44) 00:19:07.336 13212.479 - 13265.118: 68.7596% ( 20) 00:19:07.336 13265.118 - 13317.757: 68.9429% ( 19) 00:19:07.336 13317.757 - 13370.397: 69.1551% ( 22) 00:19:07.336 13370.397 - 13423.036: 69.4734% ( 33) 00:19:07.336 13423.036 - 13475.676: 69.7434% ( 28) 00:19:07.336 13475.676 - 13580.954: 70.3414% ( 62) 00:19:07.336 13580.954 - 13686.233: 71.0745% ( 76) 00:19:07.336 13686.233 - 13791.512: 71.5856% ( 53) 00:19:07.336 13791.512 - 13896.790: 72.0968% ( 53) 00:19:07.336 13896.790 - 14002.069: 72.7720% ( 70) 00:19:07.336 14002.069 - 14107.348: 73.3989% ( 65) 00:19:07.336 14107.348 - 14212.627: 74.1030% ( 73) 00:19:07.336 14212.627 - 14317.905: 75.0289% ( 96) 00:19:07.336 14317.905 - 14423.184: 75.9259% ( 93) 00:19:07.336 14423.184 - 14528.463: 76.9194% ( 103) 00:19:07.336 14528.463 - 14633.741: 77.9225% ( 104) 00:19:07.336 14633.741 - 14739.020: 78.5687% ( 67) 00:19:07.336 14739.020 - 14844.299: 79.2921% ( 75) 00:19:07.336 14844.299 - 14949.578: 79.9672% ( 70) 00:19:07.336 14949.578 - 15054.856: 80.6038% ( 66) 00:19:07.336 15054.856 - 15160.135: 81.3175% ( 74) 00:19:07.336 15160.135 - 15265.414: 82.1181% ( 83) 00:19:07.336 15265.414 - 15370.692: 82.9186% ( 83) 00:19:07.336 15370.692 - 15475.971: 83.7384% ( 85) 00:19:07.336 15475.971 - 15581.250: 84.5100% ( 80) 00:19:07.336 15581.250 - 15686.529: 85.2527% ( 77) 00:19:07.336 15686.529 - 15791.807: 85.9375% ( 71) 00:19:07.336 15791.807 - 15897.086: 86.6609% ( 75) 00:19:07.336 15897.086 - 16002.365: 87.0853% ( 44) 00:19:07.336 16002.365 - 16107.643: 87.5000% ( 43) 00:19:07.336 16107.643 - 16212.922: 87.9244% ( 44) 00:19:07.336 16212.922 - 16318.201: 88.4356% ( 53) 
00:19:07.336 16318.201 - 16423.480: 89.0143% ( 60) 00:19:07.336 16423.480 - 16528.758: 89.4483% ( 45) 00:19:07.336 16528.758 - 16634.037: 89.9113% ( 48) 00:19:07.336 16634.037 - 16739.316: 90.3646% ( 47) 00:19:07.336 16739.316 - 16844.594: 90.9433% ( 60) 00:19:07.336 16844.594 - 16949.873: 91.4641% ( 54) 00:19:07.336 16949.873 - 17055.152: 92.1779% ( 74) 00:19:07.336 17055.152 - 17160.431: 92.8627% ( 71) 00:19:07.336 17160.431 - 17265.709: 93.5860% ( 75) 00:19:07.336 17265.709 - 17370.988: 94.2323% ( 67) 00:19:07.336 17370.988 - 17476.267: 94.7434% ( 53) 00:19:07.336 17476.267 - 17581.545: 95.1485% ( 42) 00:19:07.336 17581.545 - 17686.824: 95.3993% ( 26) 00:19:07.336 17686.824 - 17792.103: 95.6308% ( 24) 00:19:07.336 17792.103 - 17897.382: 96.0359% ( 42) 00:19:07.336 17897.382 - 18002.660: 96.3252% ( 30) 00:19:07.336 18002.660 - 18107.939: 96.5567% ( 24) 00:19:07.336 18107.939 - 18213.218: 96.6821% ( 13) 00:19:07.336 18213.218 - 18318.496: 96.7689% ( 9) 00:19:07.336 18318.496 - 18423.775: 96.9232% ( 16) 00:19:07.336 18423.775 - 18529.054: 97.0197% ( 10) 00:19:07.336 18529.054 - 18634.333: 97.0968% ( 8) 00:19:07.336 18634.333 - 18739.611: 97.1933% ( 10) 00:19:07.336 18739.611 - 18844.890: 97.3573% ( 17) 00:19:07.336 18844.890 - 18950.169: 97.4923% ( 14) 00:19:07.336 18950.169 - 19055.447: 97.6562% ( 17) 00:19:07.336 19055.447 - 19160.726: 97.7431% ( 9) 00:19:07.336 19160.726 - 19266.005: 97.7913% ( 5) 00:19:07.336 19266.005 - 19371.284: 97.8781% ( 9) 00:19:07.336 19371.284 - 19476.562: 97.9456% ( 7) 00:19:07.336 19476.562 - 19581.841: 98.0517% ( 11) 00:19:07.336 19581.841 - 19687.120: 98.1481% ( 10) 00:19:07.336 19687.120 - 19792.398: 98.2253% ( 8) 00:19:07.336 19792.398 - 19897.677: 98.3603% ( 14) 00:19:07.336 19897.677 - 20002.956: 98.5436% ( 19) 00:19:07.336 20002.956 - 20108.235: 98.6015% ( 6) 00:19:07.336 20108.235 - 20213.513: 98.6111% ( 1) 00:19:07.336 20213.513 - 20318.792: 98.6400% ( 3) 00:19:07.336 20634.628 - 20739.907: 98.6497% ( 1) 00:19:07.336 20739.907 - 20845.186: 98.6786% ( 3) 00:19:07.336 20845.186 - 20950.464: 98.7172% ( 4) 00:19:07.336 20950.464 - 21055.743: 98.7461% ( 3) 00:19:07.336 21055.743 - 21161.022: 98.7654% ( 2) 00:19:07.336 33689.189 - 33899.746: 98.8426% ( 8) 00:19:07.336 33899.746 - 34110.304: 98.9198% ( 8) 00:19:07.336 34110.304 - 34320.861: 98.9873% ( 7) 00:19:07.336 34320.861 - 34531.418: 99.0451% ( 6) 00:19:07.336 34531.418 - 34741.976: 99.1030% ( 6) 00:19:07.336 34741.976 - 34952.533: 99.1705% ( 7) 00:19:07.336 34952.533 - 35163.091: 99.2380% ( 7) 00:19:07.336 35163.091 - 35373.648: 99.3152% ( 8) 00:19:07.336 35373.648 - 35584.206: 99.3827% ( 7) 00:19:07.336 42111.486 - 42322.043: 99.4117% ( 3) 00:19:07.336 42322.043 - 42532.601: 99.4792% ( 7) 00:19:07.336 42532.601 - 42743.158: 99.5467% ( 7) 00:19:07.336 42743.158 - 42953.716: 99.6142% ( 7) 00:19:07.336 42953.716 - 43164.273: 99.6721% ( 6) 00:19:07.336 43164.273 - 43374.831: 99.7492% ( 8) 00:19:07.336 43374.831 - 43585.388: 99.8167% ( 7) 00:19:07.336 43585.388 - 43795.945: 99.8746% ( 6) 00:19:07.336 43795.945 - 44006.503: 99.9421% ( 7) 00:19:07.336 44006.503 - 44217.060: 100.0000% ( 6) 00:19:07.336 00:19:07.336 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:19:07.336 ============================================================================== 00:19:07.336 Range in us Cumulative IO count 00:19:07.336 7474.789 - 7527.428: 0.0193% ( 2) 00:19:07.336 7527.428 - 7580.067: 0.0482% ( 3) 00:19:07.336 7580.067 - 7632.707: 0.0868% ( 4) 00:19:07.336 7632.707 - 7685.346: 0.1157% ( 3) 00:19:07.336 
7685.346 - 7737.986: 0.1929% ( 8) 00:19:07.336 7737.986 - 7790.625: 0.2990% ( 11) 00:19:07.336 7790.625 - 7843.264: 0.6076% ( 32) 00:19:07.336 7843.264 - 7895.904: 0.8102% ( 21) 00:19:07.336 7895.904 - 7948.543: 1.0610% ( 26) 00:19:07.336 7948.543 - 8001.182: 1.1381% ( 8) 00:19:07.336 8001.182 - 8053.822: 1.2153% ( 8) 00:19:07.336 8053.822 - 8106.461: 1.2924% ( 8) 00:19:07.336 8106.461 - 8159.100: 1.3600% ( 7) 00:19:07.336 8159.100 - 8211.740: 1.4660% ( 11) 00:19:07.336 8211.740 - 8264.379: 1.7265% ( 27) 00:19:07.336 8264.379 - 8317.018: 2.0448% ( 33) 00:19:07.336 8317.018 - 8369.658: 2.5849% ( 56) 00:19:07.336 8369.658 - 8422.297: 2.9225% ( 35) 00:19:07.336 8422.297 - 8474.937: 3.2600% ( 35) 00:19:07.336 8474.937 - 8527.576: 3.7809% ( 54) 00:19:07.336 8527.576 - 8580.215: 4.3403% ( 58) 00:19:07.336 8580.215 - 8632.855: 5.6424% ( 135) 00:19:07.336 8632.855 - 8685.494: 6.1921% ( 57) 00:19:07.336 8685.494 - 8738.133: 6.7998% ( 63) 00:19:07.336 8738.133 - 8790.773: 7.1952% ( 41) 00:19:07.336 8790.773 - 8843.412: 7.6292% ( 45) 00:19:07.336 8843.412 - 8896.051: 8.1501% ( 54) 00:19:07.336 8896.051 - 8948.691: 8.6034% ( 47) 00:19:07.336 8948.691 - 9001.330: 9.1628% ( 58) 00:19:07.336 9001.330 - 9053.969: 9.8380% ( 70) 00:19:07.336 9053.969 - 9106.609: 10.7446% ( 94) 00:19:07.336 9106.609 - 9159.248: 11.5934% ( 88) 00:19:07.336 9159.248 - 9211.888: 12.2492% ( 68) 00:19:07.336 9211.888 - 9264.527: 12.8858% ( 66) 00:19:07.336 9264.527 - 9317.166: 13.5224% ( 66) 00:19:07.336 9317.166 - 9369.806: 14.2554% ( 76) 00:19:07.336 9369.806 - 9422.445: 15.2103% ( 99) 00:19:07.336 9422.445 - 9475.084: 16.3966% ( 123) 00:19:07.336 9475.084 - 9527.724: 17.7566% ( 141) 00:19:07.336 9527.724 - 9580.363: 19.2708% ( 157) 00:19:07.336 9580.363 - 9633.002: 20.4765% ( 125) 00:19:07.336 9633.002 - 9685.642: 21.6435% ( 121) 00:19:07.336 9685.642 - 9738.281: 22.9263% ( 133) 00:19:07.336 9738.281 - 9790.920: 23.9680% ( 108) 00:19:07.337 9790.920 - 9843.560: 25.1254% ( 120) 00:19:07.337 9843.560 - 9896.199: 26.3889% ( 131) 00:19:07.337 9896.199 - 9948.839: 27.5559% ( 121) 00:19:07.337 9948.839 - 10001.478: 29.1281% ( 163) 00:19:07.337 10001.478 - 10054.117: 30.6617% ( 159) 00:19:07.337 10054.117 - 10106.757: 31.8383% ( 122) 00:19:07.337 10106.757 - 10159.396: 32.8704% ( 107) 00:19:07.337 10159.396 - 10212.035: 33.6323% ( 79) 00:19:07.337 10212.035 - 10264.675: 34.5004% ( 90) 00:19:07.337 10264.675 - 10317.314: 35.4552% ( 99) 00:19:07.337 10317.314 - 10369.953: 36.4680% ( 105) 00:19:07.337 10369.953 - 10422.593: 37.4807% ( 105) 00:19:07.337 10422.593 - 10475.232: 38.5802% ( 114) 00:19:07.337 10475.232 - 10527.871: 39.8148% ( 128) 00:19:07.337 10527.871 - 10580.511: 40.7215% ( 94) 00:19:07.337 10580.511 - 10633.150: 41.6667% ( 98) 00:19:07.337 10633.150 - 10685.790: 42.8241% ( 120) 00:19:07.337 10685.790 - 10738.429: 43.7307% ( 94) 00:19:07.337 10738.429 - 10791.068: 44.6373% ( 94) 00:19:07.337 10791.068 - 10843.708: 45.2450% ( 63) 00:19:07.337 10843.708 - 10896.347: 45.9394% ( 72) 00:19:07.337 10896.347 - 10948.986: 46.5085% ( 59) 00:19:07.337 10948.986 - 11001.626: 47.0583% ( 57) 00:19:07.337 11001.626 - 11054.265: 47.4441% ( 40) 00:19:07.337 11054.265 - 11106.904: 47.8877% ( 46) 00:19:07.337 11106.904 - 11159.544: 48.5436% ( 68) 00:19:07.337 11159.544 - 11212.183: 49.1030% ( 58) 00:19:07.337 11212.183 - 11264.822: 49.8746% ( 80) 00:19:07.337 11264.822 - 11317.462: 50.5305% ( 68) 00:19:07.337 11317.462 - 11370.101: 50.9742% ( 46) 00:19:07.337 11370.101 - 11422.741: 51.5914% ( 64) 00:19:07.337 11422.741 - 11475.380: 52.3052% 
( 74) 00:19:07.337 11475.380 - 11528.019: 52.9032% ( 62) 00:19:07.337 11528.019 - 11580.659: 53.5301% ( 65) 00:19:07.337 11580.659 - 11633.298: 54.1667% ( 66) 00:19:07.337 11633.298 - 11685.937: 54.8804% ( 74) 00:19:07.337 11685.937 - 11738.577: 55.4591% ( 60) 00:19:07.337 11738.577 - 11791.216: 56.1053% ( 67) 00:19:07.337 11791.216 - 11843.855: 56.9444% ( 87) 00:19:07.337 11843.855 - 11896.495: 57.6678% ( 75) 00:19:07.337 11896.495 - 11949.134: 58.3816% ( 74) 00:19:07.337 11949.134 - 12001.773: 59.0856% ( 73) 00:19:07.337 12001.773 - 12054.413: 59.6258% ( 56) 00:19:07.337 12054.413 - 12107.052: 60.4263% ( 83) 00:19:07.337 12107.052 - 12159.692: 61.2944% ( 90) 00:19:07.337 12159.692 - 12212.331: 61.9792% ( 71) 00:19:07.337 12212.331 - 12264.970: 62.5386% ( 58) 00:19:07.337 12264.970 - 12317.610: 63.0401% ( 52) 00:19:07.337 12317.610 - 12370.249: 63.4452% ( 42) 00:19:07.337 12370.249 - 12422.888: 63.8214% ( 39) 00:19:07.337 12422.888 - 12475.528: 64.0721% ( 26) 00:19:07.337 12475.528 - 12528.167: 64.3326% ( 27) 00:19:07.337 12528.167 - 12580.806: 64.5544% ( 23) 00:19:07.337 12580.806 - 12633.446: 64.8438% ( 30) 00:19:07.337 12633.446 - 12686.085: 65.1427% ( 31) 00:19:07.337 12686.085 - 12738.724: 65.4417% ( 31) 00:19:07.337 12738.724 - 12791.364: 65.7890% ( 36) 00:19:07.337 12791.364 - 12844.003: 66.1265% ( 35) 00:19:07.337 12844.003 - 12896.643: 66.4834% ( 37) 00:19:07.337 12896.643 - 12949.282: 67.0621% ( 60) 00:19:07.337 12949.282 - 13001.921: 67.4769% ( 43) 00:19:07.337 13001.921 - 13054.561: 67.8241% ( 36) 00:19:07.337 13054.561 - 13107.200: 68.1520% ( 34) 00:19:07.337 13107.200 - 13159.839: 68.5185% ( 38) 00:19:07.337 13159.839 - 13212.479: 68.8175% ( 31) 00:19:07.337 13212.479 - 13265.118: 69.0972% ( 29) 00:19:07.337 13265.118 - 13317.757: 69.4059% ( 32) 00:19:07.337 13317.757 - 13370.397: 69.7434% ( 35) 00:19:07.337 13370.397 - 13423.036: 69.9749% ( 24) 00:19:07.337 13423.036 - 13475.676: 70.1485% ( 18) 00:19:07.337 13475.676 - 13580.954: 70.5826% ( 45) 00:19:07.337 13580.954 - 13686.233: 70.9491% ( 38) 00:19:07.337 13686.233 - 13791.512: 71.4120% ( 48) 00:19:07.337 13791.512 - 13896.790: 71.8075% ( 41) 00:19:07.337 13896.790 - 14002.069: 72.2704% ( 48) 00:19:07.337 14002.069 - 14107.348: 72.7527% ( 50) 00:19:07.337 14107.348 - 14212.627: 73.4761% ( 75) 00:19:07.337 14212.627 - 14317.905: 74.1995% ( 75) 00:19:07.337 14317.905 - 14423.184: 74.9228% ( 75) 00:19:07.337 14423.184 - 14528.463: 75.8488% ( 96) 00:19:07.337 14528.463 - 14633.741: 76.9194% ( 111) 00:19:07.337 14633.741 - 14739.020: 78.3083% ( 144) 00:19:07.337 14739.020 - 14844.299: 79.5621% ( 130) 00:19:07.337 14844.299 - 14949.578: 80.5459% ( 102) 00:19:07.337 14949.578 - 15054.856: 81.3754% ( 86) 00:19:07.337 15054.856 - 15160.135: 82.1181% ( 77) 00:19:07.337 15160.135 - 15265.414: 82.9090% ( 82) 00:19:07.337 15265.414 - 15370.692: 83.4780% ( 59) 00:19:07.337 15370.692 - 15475.971: 83.9217% ( 46) 00:19:07.337 15475.971 - 15581.250: 84.3075% ( 40) 00:19:07.337 15581.250 - 15686.529: 84.7801% ( 49) 00:19:07.337 15686.529 - 15791.807: 85.3106% ( 55) 00:19:07.337 15791.807 - 15897.086: 85.9857% ( 70) 00:19:07.337 15897.086 - 16002.365: 86.9020% ( 95) 00:19:07.337 16002.365 - 16107.643: 87.8279% ( 96) 00:19:07.337 16107.643 - 16212.922: 88.3295% ( 52) 00:19:07.337 16212.922 - 16318.201: 88.6767% ( 36) 00:19:07.337 16318.201 - 16423.480: 89.1879% ( 53) 00:19:07.337 16423.480 - 16528.758: 89.5544% ( 38) 00:19:07.337 16528.758 - 16634.037: 89.9788% ( 44) 00:19:07.337 16634.037 - 16739.316: 90.3453% ( 38) 00:19:07.337 16739.316 - 
16844.594: 90.6636% ( 33) 00:19:07.337 16844.594 - 16949.873: 91.0783% ( 43) 00:19:07.337 16949.873 - 17055.152: 91.5220% ( 46) 00:19:07.337 17055.152 - 17160.431: 92.0525% ( 55) 00:19:07.337 17160.431 - 17265.709: 92.5347% ( 50) 00:19:07.337 17265.709 - 17370.988: 93.2677% ( 76) 00:19:07.337 17370.988 - 17476.267: 93.7114% ( 46) 00:19:07.337 17476.267 - 17581.545: 93.9718% ( 27) 00:19:07.337 17581.545 - 17686.824: 94.2901% ( 33) 00:19:07.337 17686.824 - 17792.103: 94.6470% ( 37) 00:19:07.337 17792.103 - 17897.382: 95.0907% ( 46) 00:19:07.337 17897.382 - 18002.660: 95.5922% ( 52) 00:19:07.337 18002.660 - 18107.939: 96.0069% ( 43) 00:19:07.337 18107.939 - 18213.218: 96.3059% ( 31) 00:19:07.337 18213.218 - 18318.496: 96.7207% ( 43) 00:19:07.337 18318.496 - 18423.775: 96.9811% ( 27) 00:19:07.337 18423.775 - 18529.054: 97.1258% ( 15) 00:19:07.337 18529.054 - 18634.333: 97.2801% ( 16) 00:19:07.337 18634.333 - 18739.611: 97.4826% ( 21) 00:19:07.337 18739.611 - 18844.890: 97.5887% ( 11) 00:19:07.337 18844.890 - 18950.169: 97.6755% ( 9) 00:19:07.337 18950.169 - 19055.447: 97.7527% ( 8) 00:19:07.337 19055.447 - 19160.726: 97.8202% ( 7) 00:19:07.337 19160.726 - 19266.005: 97.9649% ( 15) 00:19:07.337 19266.005 - 19371.284: 98.1096% ( 15) 00:19:07.337 19371.284 - 19476.562: 98.2157% ( 11) 00:19:07.337 19476.562 - 19581.841: 98.3218% ( 11) 00:19:07.337 19581.841 - 19687.120: 98.3410% ( 2) 00:19:07.337 19687.120 - 19792.398: 98.3796% ( 4) 00:19:07.337 19792.398 - 19897.677: 98.4086% ( 3) 00:19:07.337 19897.677 - 20002.956: 98.4375% ( 3) 00:19:07.337 20002.956 - 20108.235: 98.4664% ( 3) 00:19:07.337 20108.235 - 20213.513: 98.4954% ( 3) 00:19:07.337 20213.513 - 20318.792: 98.5340% ( 4) 00:19:07.337 20318.792 - 20424.071: 98.5725% ( 4) 00:19:07.337 20424.071 - 20529.349: 98.6015% ( 3) 00:19:07.337 20529.349 - 20634.628: 98.6400% ( 4) 00:19:07.337 20634.628 - 20739.907: 98.6786% ( 4) 00:19:07.337 20739.907 - 20845.186: 98.7172% ( 4) 00:19:07.337 20845.186 - 20950.464: 98.7558% ( 4) 00:19:07.337 20950.464 - 21055.743: 98.7654% ( 1) 00:19:07.337 31794.172 - 32004.729: 98.7847% ( 2) 00:19:07.337 32004.729 - 32215.287: 98.8619% ( 8) 00:19:07.337 32215.287 - 32425.844: 98.9294% ( 7) 00:19:07.337 32425.844 - 32636.402: 99.0066% ( 8) 00:19:07.337 32636.402 - 32846.959: 99.0837% ( 8) 00:19:07.337 32846.959 - 33057.516: 99.1609% ( 8) 00:19:07.337 33057.516 - 33268.074: 99.2380% ( 8) 00:19:07.337 33268.074 - 33478.631: 99.3152% ( 8) 00:19:07.337 33478.631 - 33689.189: 99.3827% ( 7) 00:19:07.337 40848.141 - 41058.699: 99.4502% ( 7) 00:19:07.337 41058.699 - 41269.256: 99.5274% ( 8) 00:19:07.337 41269.256 - 41479.814: 99.5949% ( 7) 00:19:07.337 41479.814 - 41690.371: 99.6721% ( 8) 00:19:07.337 41690.371 - 41900.929: 99.7396% ( 7) 00:19:07.337 41900.929 - 42111.486: 99.8167% ( 8) 00:19:07.337 42111.486 - 42322.043: 99.8843% ( 7) 00:19:07.337 42322.043 - 42532.601: 99.9614% ( 8) 00:19:07.337 42532.601 - 42743.158: 100.0000% ( 4) 00:19:07.337 00:19:07.337 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:19:07.337 ============================================================================== 00:19:07.337 Range in us Cumulative IO count 00:19:07.337 7580.067 - 7632.707: 0.0386% ( 4) 00:19:07.337 7632.707 - 7685.346: 0.1157% ( 8) 00:19:07.337 7685.346 - 7737.986: 0.1929% ( 8) 00:19:07.337 7737.986 - 7790.625: 0.4051% ( 22) 00:19:07.337 7790.625 - 7843.264: 0.4919% ( 9) 00:19:07.337 7843.264 - 7895.904: 0.5594% ( 7) 00:19:07.337 7895.904 - 7948.543: 0.5883% ( 3) 00:19:07.337 7948.543 - 8001.182: 0.6076% ( 2) 
00:19:07.337 8001.182 - 8053.822: 0.6559% ( 5) 00:19:07.337 8053.822 - 8106.461: 0.7620% ( 11) 00:19:07.337 8106.461 - 8159.100: 0.9163% ( 16) 00:19:07.337 8159.100 - 8211.740: 1.2924% ( 39) 00:19:07.337 8211.740 - 8264.379: 1.6011% ( 32) 00:19:07.337 8264.379 - 8317.018: 2.0351% ( 45) 00:19:07.337 8317.018 - 8369.658: 2.5367% ( 52) 00:19:07.337 8369.658 - 8422.297: 2.9803% ( 46) 00:19:07.337 8422.297 - 8474.937: 3.5108% ( 55) 00:19:07.337 8474.937 - 8527.576: 3.9834% ( 49) 00:19:07.337 8527.576 - 8580.215: 4.7261% ( 77) 00:19:07.337 8580.215 - 8632.855: 5.2951% ( 59) 00:19:07.337 8632.855 - 8685.494: 5.7967% ( 52) 00:19:07.337 8685.494 - 8738.133: 6.3754% ( 60) 00:19:07.337 8738.133 - 8790.773: 7.1277% ( 78) 00:19:07.337 8790.773 - 8843.412: 7.8607% ( 76) 00:19:07.337 8843.412 - 8896.051: 8.5359% ( 70) 00:19:07.337 8896.051 - 8948.691: 9.3075% ( 80) 00:19:07.337 8948.691 - 9001.330: 10.0598% ( 78) 00:19:07.337 9001.330 - 9053.969: 10.8218% ( 79) 00:19:07.337 9053.969 - 9106.609: 11.5162% ( 72) 00:19:07.337 9106.609 - 9159.248: 12.5096% ( 103) 00:19:07.337 9159.248 - 9211.888: 13.3777% ( 90) 00:19:07.337 9211.888 - 9264.527: 14.1590% ( 81) 00:19:07.337 9264.527 - 9317.166: 14.8534% ( 72) 00:19:07.337 9317.166 - 9369.806: 15.5478% ( 72) 00:19:07.337 9369.806 - 9422.445: 16.1748% ( 65) 00:19:07.337 9422.445 - 9475.084: 17.0139% ( 87) 00:19:07.337 9475.084 - 9527.724: 17.7276% ( 74) 00:19:07.338 9527.724 - 9580.363: 18.5957% ( 90) 00:19:07.338 9580.363 - 9633.002: 19.3383% ( 77) 00:19:07.338 9633.002 - 9685.642: 20.1003% ( 79) 00:19:07.338 9685.642 - 9738.281: 20.9877% ( 92) 00:19:07.338 9738.281 - 9790.920: 22.1451% ( 120) 00:19:07.338 9790.920 - 9843.560: 23.0903% ( 98) 00:19:07.338 9843.560 - 9896.199: 24.6046% ( 157) 00:19:07.338 9896.199 - 9948.839: 26.0224% ( 147) 00:19:07.338 9948.839 - 10001.478: 27.3052% ( 133) 00:19:07.338 10001.478 - 10054.117: 28.4722% ( 121) 00:19:07.338 10054.117 - 10106.757: 29.6393% ( 121) 00:19:07.338 10106.757 - 10159.396: 31.0764% ( 149) 00:19:07.338 10159.396 - 10212.035: 32.3302% ( 130) 00:19:07.338 10212.035 - 10264.675: 33.3430% ( 105) 00:19:07.338 10264.675 - 10317.314: 34.2785% ( 97) 00:19:07.338 10317.314 - 10369.953: 35.0984% ( 85) 00:19:07.338 10369.953 - 10422.593: 36.0147% ( 95) 00:19:07.338 10422.593 - 10475.232: 37.2492% ( 128) 00:19:07.338 10475.232 - 10527.871: 38.3295% ( 112) 00:19:07.338 10527.871 - 10580.511: 39.2650% ( 97) 00:19:07.338 10580.511 - 10633.150: 40.2199% ( 99) 00:19:07.338 10633.150 - 10685.790: 41.3098% ( 113) 00:19:07.338 10685.790 - 10738.429: 42.3997% ( 113) 00:19:07.338 10738.429 - 10791.068: 43.7886% ( 144) 00:19:07.338 10791.068 - 10843.708: 44.5602% ( 80) 00:19:07.338 10843.708 - 10896.347: 45.5729% ( 105) 00:19:07.338 10896.347 - 10948.986: 46.2095% ( 66) 00:19:07.338 10948.986 - 11001.626: 46.9329% ( 75) 00:19:07.338 11001.626 - 11054.265: 47.4537% ( 54) 00:19:07.338 11054.265 - 11106.904: 47.7913% ( 35) 00:19:07.338 11106.904 - 11159.544: 48.2639% ( 49) 00:19:07.338 11159.544 - 11212.183: 48.6497% ( 40) 00:19:07.338 11212.183 - 11264.822: 49.2573% ( 63) 00:19:07.338 11264.822 - 11317.462: 49.7685% ( 53) 00:19:07.338 11317.462 - 11370.101: 50.4823% ( 74) 00:19:07.338 11370.101 - 11422.741: 51.1285% ( 67) 00:19:07.338 11422.741 - 11475.380: 51.8615% ( 76) 00:19:07.338 11475.380 - 11528.019: 52.6427% ( 81) 00:19:07.338 11528.019 - 11580.659: 53.4915% ( 88) 00:19:07.338 11580.659 - 11633.298: 54.1377% ( 67) 00:19:07.338 11633.298 - 11685.937: 54.7647% ( 65) 00:19:07.338 11685.937 - 11738.577: 55.3241% ( 58) 
00:19:07.338 11738.577 - 11791.216: 56.0957% ( 80) 00:19:07.338 11791.216 - 11843.855: 56.7033% ( 63) 00:19:07.338 11843.855 - 11896.495: 57.2820% ( 60) 00:19:07.338 11896.495 - 11949.134: 57.9668% ( 71) 00:19:07.338 11949.134 - 12001.773: 58.4684% ( 52) 00:19:07.338 12001.773 - 12054.413: 58.9313% ( 48) 00:19:07.338 12054.413 - 12107.052: 59.4907% ( 58) 00:19:07.338 12107.052 - 12159.692: 60.1177% ( 65) 00:19:07.338 12159.692 - 12212.331: 60.8025% ( 71) 00:19:07.338 12212.331 - 12264.970: 61.5162% ( 74) 00:19:07.338 12264.970 - 12317.610: 62.2106% ( 72) 00:19:07.338 12317.610 - 12370.249: 62.8279% ( 64) 00:19:07.338 12370.249 - 12422.888: 63.3777% ( 57) 00:19:07.338 12422.888 - 12475.528: 63.8600% ( 50) 00:19:07.338 12475.528 - 12528.167: 64.4579% ( 62) 00:19:07.338 12528.167 - 12580.806: 64.8920% ( 45) 00:19:07.338 12580.806 - 12633.446: 65.4128% ( 54) 00:19:07.338 12633.446 - 12686.085: 66.0108% ( 62) 00:19:07.338 12686.085 - 12738.724: 66.4834% ( 49) 00:19:07.338 12738.724 - 12791.364: 66.8017% ( 33) 00:19:07.338 12791.364 - 12844.003: 67.2261% ( 44) 00:19:07.338 12844.003 - 12896.643: 67.5058% ( 29) 00:19:07.338 12896.643 - 12949.282: 67.8241% ( 33) 00:19:07.338 12949.282 - 13001.921: 68.1134% ( 30) 00:19:07.338 13001.921 - 13054.561: 68.5282% ( 43) 00:19:07.338 13054.561 - 13107.200: 68.8079% ( 29) 00:19:07.338 13107.200 - 13159.839: 69.1647% ( 37) 00:19:07.338 13159.839 - 13212.479: 69.5216% ( 37) 00:19:07.338 13212.479 - 13265.118: 69.8495% ( 34) 00:19:07.338 13265.118 - 13317.757: 70.0907% ( 25) 00:19:07.338 13317.757 - 13370.397: 70.3221% ( 24) 00:19:07.338 13370.397 - 13423.036: 70.5633% ( 25) 00:19:07.338 13423.036 - 13475.676: 70.8526% ( 30) 00:19:07.338 13475.676 - 13580.954: 71.7110% ( 89) 00:19:07.338 13580.954 - 13686.233: 72.3862% ( 70) 00:19:07.338 13686.233 - 13791.512: 72.9649% ( 60) 00:19:07.338 13791.512 - 13896.790: 73.5050% ( 56) 00:19:07.338 13896.790 - 14002.069: 74.2959% ( 82) 00:19:07.338 14002.069 - 14107.348: 74.9904% ( 72) 00:19:07.338 14107.348 - 14212.627: 75.6559% ( 69) 00:19:07.338 14212.627 - 14317.905: 76.3214% ( 69) 00:19:07.338 14317.905 - 14423.184: 77.0737% ( 78) 00:19:07.338 14423.184 - 14528.463: 77.8453% ( 80) 00:19:07.338 14528.463 - 14633.741: 78.5494% ( 73) 00:19:07.338 14633.741 - 14739.020: 79.2631% ( 74) 00:19:07.338 14739.020 - 14844.299: 79.8900% ( 65) 00:19:07.338 14844.299 - 14949.578: 80.3434% ( 47) 00:19:07.338 14949.578 - 15054.856: 80.7677% ( 44) 00:19:07.338 15054.856 - 15160.135: 81.3947% ( 65) 00:19:07.338 15160.135 - 15265.414: 82.3399% ( 98) 00:19:07.338 15265.414 - 15370.692: 83.2851% ( 98) 00:19:07.338 15370.692 - 15475.971: 83.9506% ( 69) 00:19:07.338 15475.971 - 15581.250: 84.5004% ( 57) 00:19:07.338 15581.250 - 15686.529: 84.8765% ( 39) 00:19:07.338 15686.529 - 15791.807: 85.3202% ( 46) 00:19:07.338 15791.807 - 15897.086: 85.9471% ( 65) 00:19:07.338 15897.086 - 16002.365: 86.6898% ( 77) 00:19:07.338 16002.365 - 16107.643: 87.3071% ( 64) 00:19:07.338 16107.643 - 16212.922: 87.6447% ( 35) 00:19:07.338 16212.922 - 16318.201: 87.9630% ( 33) 00:19:07.338 16318.201 - 16423.480: 88.6092% ( 67) 00:19:07.338 16423.480 - 16528.758: 89.1879% ( 60) 00:19:07.338 16528.758 - 16634.037: 89.6798% ( 51) 00:19:07.338 16634.037 - 16739.316: 90.5189% ( 87) 00:19:07.338 16739.316 - 16844.594: 91.1555% ( 66) 00:19:07.338 16844.594 - 16949.873: 91.7921% ( 66) 00:19:07.338 16949.873 - 17055.152: 92.3997% ( 63) 00:19:07.338 17055.152 - 17160.431: 92.8337% ( 45) 00:19:07.338 17160.431 - 17265.709: 93.2774% ( 46) 00:19:07.338 17265.709 - 17370.988: 
93.7693% ( 51) 00:19:07.338 17370.988 - 17476.267: 94.2612% ( 51) 00:19:07.338 17476.267 - 17581.545: 94.6566% ( 41) 00:19:07.338 17581.545 - 17686.824: 94.9556% ( 31) 00:19:07.338 17686.824 - 17792.103: 95.2160% ( 27) 00:19:07.338 17792.103 - 17897.382: 95.5729% ( 37) 00:19:07.338 17897.382 - 18002.660: 95.8044% ( 24) 00:19:07.338 18002.660 - 18107.939: 95.9684% ( 17) 00:19:07.338 18107.939 - 18213.218: 96.1998% ( 24) 00:19:07.338 18213.218 - 18318.496: 96.3927% ( 20) 00:19:07.338 18318.496 - 18423.775: 96.6339% ( 25) 00:19:07.338 18423.775 - 18529.054: 96.7882% ( 16) 00:19:07.338 18529.054 - 18634.333: 96.9329% ( 15) 00:19:07.338 18634.333 - 18739.611: 97.2801% ( 36) 00:19:07.338 18739.611 - 18844.890: 97.5212% ( 25) 00:19:07.338 18844.890 - 18950.169: 97.9649% ( 46) 00:19:07.338 18950.169 - 19055.447: 98.1481% ( 19) 00:19:07.338 19055.447 - 19160.726: 98.2928% ( 15) 00:19:07.338 19160.726 - 19266.005: 98.3893% ( 10) 00:19:07.338 19266.005 - 19371.284: 98.4568% ( 7) 00:19:07.338 19371.284 - 19476.562: 98.5243% ( 7) 00:19:07.338 19476.562 - 19581.841: 98.5822% ( 6) 00:19:07.338 19581.841 - 19687.120: 98.6304% ( 5) 00:19:07.338 19687.120 - 19792.398: 98.6690% ( 4) 00:19:07.338 19792.398 - 19897.677: 98.7076% ( 4) 00:19:07.338 19897.677 - 20002.956: 98.7365% ( 3) 00:19:07.338 20002.956 - 20108.235: 98.7654% ( 3) 00:19:07.338 31373.057 - 31583.614: 98.8137% ( 5) 00:19:07.338 31583.614 - 31794.172: 98.8812% ( 7) 00:19:07.338 31794.172 - 32004.729: 98.9487% ( 7) 00:19:07.338 32004.729 - 32215.287: 99.0258% ( 8) 00:19:07.338 32215.287 - 32425.844: 99.0934% ( 7) 00:19:07.338 32425.844 - 32636.402: 99.1705% ( 8) 00:19:07.338 32636.402 - 32846.959: 99.2477% ( 8) 00:19:07.338 32846.959 - 33057.516: 99.3152% ( 7) 00:19:07.338 33057.516 - 33268.074: 99.3827% ( 7) 00:19:07.338 39795.354 - 40005.912: 99.3924% ( 1) 00:19:07.338 40005.912 - 40216.469: 99.4599% ( 7) 00:19:07.338 40216.469 - 40427.027: 99.5274% ( 7) 00:19:07.338 40427.027 - 40637.584: 99.5949% ( 7) 00:19:07.338 40637.584 - 40848.141: 99.6624% ( 7) 00:19:07.338 40848.141 - 41058.699: 99.7396% ( 8) 00:19:07.338 41058.699 - 41269.256: 99.8071% ( 7) 00:19:07.338 41269.256 - 41479.814: 99.8843% ( 8) 00:19:07.338 41479.814 - 41690.371: 99.9614% ( 8) 00:19:07.338 41690.371 - 41900.929: 100.0000% ( 4) 00:19:07.338 00:19:07.338 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:19:07.338 ============================================================================== 00:19:07.338 Range in us Cumulative IO count 00:19:07.338 7632.707 - 7685.346: 0.0096% ( 1) 00:19:07.338 7685.346 - 7737.986: 0.0579% ( 5) 00:19:07.338 7737.986 - 7790.625: 0.1350% ( 8) 00:19:07.338 7790.625 - 7843.264: 0.2604% ( 13) 00:19:07.338 7843.264 - 7895.904: 0.4726% ( 22) 00:19:07.338 7895.904 - 7948.543: 0.6076% ( 14) 00:19:07.338 7948.543 - 8001.182: 0.7427% ( 14) 00:19:07.338 8001.182 - 8053.822: 0.9356% ( 20) 00:19:07.338 8053.822 - 8106.461: 1.1285% ( 20) 00:19:07.338 8106.461 - 8159.100: 1.5046% ( 39) 00:19:07.338 8159.100 - 8211.740: 1.7554% ( 26) 00:19:07.338 8211.740 - 8264.379: 2.0544% ( 31) 00:19:07.338 8264.379 - 8317.018: 2.3438% ( 30) 00:19:07.338 8317.018 - 8369.658: 2.7006% ( 37) 00:19:07.338 8369.658 - 8422.297: 2.9321% ( 24) 00:19:07.338 8422.297 - 8474.937: 3.2504% ( 33) 00:19:07.338 8474.937 - 8527.576: 3.7133% ( 48) 00:19:07.338 8527.576 - 8580.215: 4.1763% ( 48) 00:19:07.338 8580.215 - 8632.855: 4.7261% ( 57) 00:19:07.338 8632.855 - 8685.494: 5.1987% ( 49) 00:19:07.338 8685.494 - 8738.133: 5.9124% ( 74) 00:19:07.338 8738.133 - 8790.773: 
6.5394% ( 65) 00:19:07.338 8790.773 - 8843.412: 7.3592% ( 85) 00:19:07.338 8843.412 - 8896.051: 8.2465% ( 92) 00:19:07.338 8896.051 - 8948.691: 9.2014% ( 99) 00:19:07.338 8948.691 - 9001.330: 10.0019% ( 83) 00:19:07.338 9001.330 - 9053.969: 11.1883% ( 123) 00:19:07.338 9053.969 - 9106.609: 11.9213% ( 76) 00:19:07.338 9106.609 - 9159.248: 12.4421% ( 54) 00:19:07.338 9159.248 - 9211.888: 13.0787% ( 66) 00:19:07.338 9211.888 - 9264.527: 13.7635% ( 71) 00:19:07.338 9264.527 - 9317.166: 14.4387% ( 70) 00:19:07.338 9317.166 - 9369.806: 15.3453% ( 94) 00:19:07.338 9369.806 - 9422.445: 16.1844% ( 87) 00:19:07.338 9422.445 - 9475.084: 16.8789% ( 72) 00:19:07.338 9475.084 - 9527.724: 17.7855% ( 94) 00:19:07.338 9527.724 - 9580.363: 18.6246% ( 87) 00:19:07.338 9580.363 - 9633.002: 19.5988% ( 101) 00:19:07.338 9633.002 - 9685.642: 20.7562% ( 120) 00:19:07.338 9685.642 - 9738.281: 22.1065% ( 140) 00:19:07.339 9738.281 - 9790.920: 23.4954% ( 144) 00:19:07.339 9790.920 - 9843.560: 24.6142% ( 116) 00:19:07.339 9843.560 - 9896.199: 25.8005% ( 123) 00:19:07.339 9896.199 - 9948.839: 26.8808% ( 112) 00:19:07.339 9948.839 - 10001.478: 28.0768% ( 124) 00:19:07.339 10001.478 - 10054.117: 29.1956% ( 116) 00:19:07.339 10054.117 - 10106.757: 30.3723% ( 122) 00:19:07.339 10106.757 - 10159.396: 31.5490% ( 122) 00:19:07.339 10159.396 - 10212.035: 32.6389% ( 113) 00:19:07.339 10212.035 - 10264.675: 33.5359% ( 93) 00:19:07.339 10264.675 - 10317.314: 34.4811% ( 98) 00:19:07.339 10317.314 - 10369.953: 35.3492% ( 90) 00:19:07.339 10369.953 - 10422.593: 36.3040% ( 99) 00:19:07.339 10422.593 - 10475.232: 37.2685% ( 100) 00:19:07.339 10475.232 - 10527.871: 38.1655% ( 93) 00:19:07.339 10527.871 - 10580.511: 38.9757% ( 84) 00:19:07.339 10580.511 - 10633.150: 40.2778% ( 135) 00:19:07.339 10633.150 - 10685.790: 41.4062% ( 117) 00:19:07.339 10685.790 - 10738.429: 42.1586% ( 78) 00:19:07.339 10738.429 - 10791.068: 43.0170% ( 89) 00:19:07.339 10791.068 - 10843.708: 44.1262% ( 115) 00:19:07.339 10843.708 - 10896.347: 44.8978% ( 80) 00:19:07.339 10896.347 - 10948.986: 45.7755% ( 91) 00:19:07.339 10948.986 - 11001.626: 46.7978% ( 106) 00:19:07.339 11001.626 - 11054.265: 47.4826% ( 71) 00:19:07.339 11054.265 - 11106.904: 48.1867% ( 73) 00:19:07.339 11106.904 - 11159.544: 48.9294% ( 77) 00:19:07.339 11159.544 - 11212.183: 49.3441% ( 43) 00:19:07.339 11212.183 - 11264.822: 49.7782% ( 45) 00:19:07.339 11264.822 - 11317.462: 50.4147% ( 66) 00:19:07.339 11317.462 - 11370.101: 51.0320% ( 64) 00:19:07.339 11370.101 - 11422.741: 51.7072% ( 70) 00:19:07.339 11422.741 - 11475.380: 52.2859% ( 60) 00:19:07.339 11475.380 - 11528.019: 52.9514% ( 69) 00:19:07.339 11528.019 - 11580.659: 53.6844% ( 76) 00:19:07.339 11580.659 - 11633.298: 54.5235% ( 87) 00:19:07.339 11633.298 - 11685.937: 55.4302% ( 94) 00:19:07.339 11685.937 - 11738.577: 56.1150% ( 71) 00:19:07.339 11738.577 - 11791.216: 56.6165% ( 52) 00:19:07.339 11791.216 - 11843.855: 57.2434% ( 65) 00:19:07.339 11843.855 - 11896.495: 57.7739% ( 55) 00:19:07.339 11896.495 - 11949.134: 58.3044% ( 55) 00:19:07.339 11949.134 - 12001.773: 58.6806% ( 39) 00:19:07.339 12001.773 - 12054.413: 58.9024% ( 23) 00:19:07.339 12054.413 - 12107.052: 59.1435% ( 25) 00:19:07.339 12107.052 - 12159.692: 59.4425% ( 31) 00:19:07.339 12159.692 - 12212.331: 59.7512% ( 32) 00:19:07.339 12212.331 - 12264.970: 60.1080% ( 37) 00:19:07.339 12264.970 - 12317.610: 60.4842% ( 39) 00:19:07.339 12317.610 - 12370.249: 60.9471% ( 48) 00:19:07.339 12370.249 - 12422.888: 61.4583% ( 53) 00:19:07.339 12422.888 - 12475.528: 62.1528% ( 72) 
00:19:07.339 12475.528 - 12528.167: 62.7701% ( 64) 00:19:07.339 12528.167 - 12580.806: 63.3873% ( 64) 00:19:07.339 12580.806 - 12633.446: 64.0143% ( 65) 00:19:07.339 12633.446 - 12686.085: 64.5158% ( 52) 00:19:07.339 12686.085 - 12738.724: 64.9016% ( 40) 00:19:07.339 12738.724 - 12791.364: 65.3260% ( 44) 00:19:07.339 12791.364 - 12844.003: 65.7215% ( 41) 00:19:07.339 12844.003 - 12896.643: 66.1458% ( 44) 00:19:07.339 12896.643 - 12949.282: 66.7824% ( 66) 00:19:07.339 12949.282 - 13001.921: 67.2647% ( 50) 00:19:07.339 13001.921 - 13054.561: 67.8530% ( 61) 00:19:07.339 13054.561 - 13107.200: 68.5667% ( 74) 00:19:07.339 13107.200 - 13159.839: 69.2226% ( 68) 00:19:07.339 13159.839 - 13212.479: 69.7434% ( 54) 00:19:07.339 13212.479 - 13265.118: 70.1678% ( 44) 00:19:07.339 13265.118 - 13317.757: 70.5922% ( 44) 00:19:07.339 13317.757 - 13370.397: 71.0938% ( 52) 00:19:07.339 13370.397 - 13423.036: 71.5856% ( 51) 00:19:07.339 13423.036 - 13475.676: 72.0872% ( 52) 00:19:07.339 13475.676 - 13580.954: 72.9938% ( 94) 00:19:07.339 13580.954 - 13686.233: 73.6786% ( 71) 00:19:07.339 13686.233 - 13791.512: 74.3731% ( 72) 00:19:07.339 13791.512 - 13896.790: 74.8457% ( 49) 00:19:07.339 13896.790 - 14002.069: 75.2797% ( 45) 00:19:07.339 14002.069 - 14107.348: 75.8584% ( 60) 00:19:07.339 14107.348 - 14212.627: 76.4660% ( 63) 00:19:07.339 14212.627 - 14317.905: 77.0062% ( 56) 00:19:07.339 14317.905 - 14423.184: 77.5752% ( 59) 00:19:07.339 14423.184 - 14528.463: 78.0189% ( 46) 00:19:07.339 14528.463 - 14633.741: 78.4819% ( 48) 00:19:07.339 14633.741 - 14739.020: 79.2728% ( 82) 00:19:07.339 14739.020 - 14844.299: 79.8322% ( 58) 00:19:07.339 14844.299 - 14949.578: 80.3144% ( 50) 00:19:07.339 14949.578 - 15054.856: 80.7967% ( 50) 00:19:07.339 15054.856 - 15160.135: 81.1053% ( 32) 00:19:07.339 15160.135 - 15265.414: 81.3561% ( 26) 00:19:07.339 15265.414 - 15370.692: 81.5779% ( 23) 00:19:07.339 15370.692 - 15475.971: 81.8769% ( 31) 00:19:07.339 15475.971 - 15581.250: 82.2434% ( 38) 00:19:07.339 15581.250 - 15686.529: 82.8800% ( 66) 00:19:07.339 15686.529 - 15791.807: 83.9313% ( 109) 00:19:07.339 15791.807 - 15897.086: 84.8476% ( 95) 00:19:07.339 15897.086 - 16002.365: 85.7542% ( 94) 00:19:07.339 16002.365 - 16107.643: 86.4969% ( 77) 00:19:07.339 16107.643 - 16212.922: 87.2685% ( 80) 00:19:07.339 16212.922 - 16318.201: 88.1366% ( 90) 00:19:07.339 16318.201 - 16423.480: 89.0432% ( 94) 00:19:07.339 16423.480 - 16528.758: 89.9595% ( 95) 00:19:07.339 16528.758 - 16634.037: 90.7311% ( 80) 00:19:07.339 16634.037 - 16739.316: 91.4255% ( 72) 00:19:07.339 16739.316 - 16844.594: 92.0332% ( 63) 00:19:07.339 16844.594 - 16949.873: 92.5251% ( 51) 00:19:07.339 16949.873 - 17055.152: 93.0170% ( 51) 00:19:07.339 17055.152 - 17160.431: 93.6053% ( 61) 00:19:07.339 17160.431 - 17265.709: 94.1165% ( 53) 00:19:07.339 17265.709 - 17370.988: 94.3962% ( 29) 00:19:07.339 17370.988 - 17476.267: 94.6759% ( 29) 00:19:07.339 17476.267 - 17581.545: 94.8495% ( 18) 00:19:07.339 17581.545 - 17686.824: 95.0328% ( 19) 00:19:07.339 17686.824 - 17792.103: 95.1485% ( 12) 00:19:07.339 17792.103 - 17897.382: 95.2546% ( 11) 00:19:07.339 17897.382 - 18002.660: 95.3993% ( 15) 00:19:07.339 18002.660 - 18107.939: 95.5826% ( 19) 00:19:07.339 18107.939 - 18213.218: 95.8912% ( 32) 00:19:07.339 18213.218 - 18318.496: 96.1709% ( 29) 00:19:07.339 18318.496 - 18423.775: 96.5278% ( 37) 00:19:07.339 18423.775 - 18529.054: 97.0486% ( 54) 00:19:07.339 18529.054 - 18634.333: 97.4344% ( 40) 00:19:07.339 18634.333 - 18739.611: 97.6080% ( 18) 00:19:07.339 18739.611 - 
18844.890: 97.7527% ( 15) 00:19:07.339 18844.890 - 18950.169: 97.8492% ( 10) 00:19:07.339 18950.169 - 19055.447: 97.9263% ( 8) 00:19:07.339 19055.447 - 19160.726: 98.0710% ( 15) 00:19:07.339 19160.726 - 19266.005: 98.2253% ( 16) 00:19:07.339 19266.005 - 19371.284: 98.4375% ( 22) 00:19:07.339 19371.284 - 19476.562: 98.6400% ( 21) 00:19:07.339 19476.562 - 19581.841: 98.6883% ( 5) 00:19:07.339 19581.841 - 19687.120: 98.7172% ( 3) 00:19:07.339 19687.120 - 19792.398: 98.7558% ( 4) 00:19:07.339 19792.398 - 19897.677: 98.7654% ( 1) 00:19:07.339 29478.040 - 29688.598: 98.7751% ( 1) 00:19:07.339 29688.598 - 29899.155: 98.8426% ( 7) 00:19:07.339 29899.155 - 30109.712: 98.9198% ( 8) 00:19:07.339 30109.712 - 30320.270: 98.9969% ( 8) 00:19:07.339 30320.270 - 30530.827: 99.0741% ( 8) 00:19:07.339 30530.827 - 30741.385: 99.1416% ( 7) 00:19:07.339 30741.385 - 30951.942: 99.2188% ( 8) 00:19:07.339 30951.942 - 31162.500: 99.2863% ( 7) 00:19:07.339 31162.500 - 31373.057: 99.3538% ( 7) 00:19:07.339 31373.057 - 31583.614: 99.3827% ( 3) 00:19:07.339 38110.895 - 38321.452: 99.4502% ( 7) 00:19:07.339 38321.452 - 38532.010: 99.5274% ( 8) 00:19:07.339 38532.010 - 38742.567: 99.5949% ( 7) 00:19:07.339 38742.567 - 38953.124: 99.6721% ( 8) 00:19:07.339 38953.124 - 39163.682: 99.7396% ( 7) 00:19:07.339 39163.682 - 39374.239: 99.8071% ( 7) 00:19:07.339 39374.239 - 39584.797: 99.8843% ( 8) 00:19:07.339 39584.797 - 39795.354: 99.9518% ( 7) 00:19:07.339 39795.354 - 40005.912: 100.0000% ( 5) 00:19:07.339 00:19:07.339 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:19:07.339 ============================================================================== 00:19:07.339 Range in us Cumulative IO count 00:19:07.339 7632.707 - 7685.346: 0.0096% ( 1) 00:19:07.339 7685.346 - 7737.986: 0.0579% ( 5) 00:19:07.339 7737.986 - 7790.625: 0.1543% ( 10) 00:19:07.339 7790.625 - 7843.264: 0.2508% ( 10) 00:19:07.339 7843.264 - 7895.904: 0.4051% ( 16) 00:19:07.339 7895.904 - 7948.543: 0.5112% ( 11) 00:19:07.339 7948.543 - 8001.182: 0.7330% ( 23) 00:19:07.339 8001.182 - 8053.822: 0.9934% ( 27) 00:19:07.340 8053.822 - 8106.461: 1.4564% ( 48) 00:19:07.340 8106.461 - 8159.100: 1.6397% ( 19) 00:19:07.340 8159.100 - 8211.740: 1.8711% ( 24) 00:19:07.340 8211.740 - 8264.379: 2.0930% ( 23) 00:19:07.340 8264.379 - 8317.018: 2.5656% ( 49) 00:19:07.340 8317.018 - 8369.658: 3.0671% ( 52) 00:19:07.340 8369.658 - 8422.297: 3.6458% ( 60) 00:19:07.340 8422.297 - 8474.937: 4.1281% ( 50) 00:19:07.340 8474.937 - 8527.576: 4.6682% ( 56) 00:19:07.340 8527.576 - 8580.215: 5.1601% ( 51) 00:19:07.340 8580.215 - 8632.855: 5.6520% ( 51) 00:19:07.340 8632.855 - 8685.494: 6.2018% ( 57) 00:19:07.340 8685.494 - 8738.133: 6.7998% ( 62) 00:19:07.340 8738.133 - 8790.773: 7.3688% ( 59) 00:19:07.340 8790.773 - 8843.412: 8.0440% ( 70) 00:19:07.340 8843.412 - 8896.051: 8.4491% ( 42) 00:19:07.340 8896.051 - 8948.691: 8.9120% ( 48) 00:19:07.340 8948.691 - 9001.330: 9.4907% ( 60) 00:19:07.340 9001.330 - 9053.969: 10.1948% ( 73) 00:19:07.340 9053.969 - 9106.609: 10.9664% ( 80) 00:19:07.340 9106.609 - 9159.248: 11.6609% ( 72) 00:19:07.340 9159.248 - 9211.888: 12.3553% ( 72) 00:19:07.340 9211.888 - 9264.527: 12.9630% ( 63) 00:19:07.340 9264.527 - 9317.166: 13.5706% ( 63) 00:19:07.340 9317.166 - 9369.806: 14.2168% ( 67) 00:19:07.340 9369.806 - 9422.445: 15.2392% ( 106) 00:19:07.340 9422.445 - 9475.084: 16.3387% ( 114) 00:19:07.340 9475.084 - 9527.724: 17.2936% ( 99) 00:19:07.340 9527.724 - 9580.363: 18.3063% ( 105) 00:19:07.340 9580.363 - 9633.002: 19.5505% ( 129) 
00:19:07.340 9633.002 - 9685.642: 20.7755% ( 127) 00:19:07.340 9685.642 - 9738.281: 22.0004% ( 127) 00:19:07.340 9738.281 - 9790.920: 23.3507% ( 140) 00:19:07.340 9790.920 - 9843.560: 24.5563% ( 125) 00:19:07.340 9843.560 - 9896.199: 25.8488% ( 134) 00:19:07.340 9896.199 - 9948.839: 27.0737% ( 127) 00:19:07.340 9948.839 - 10001.478: 28.2311% ( 120) 00:19:07.340 10001.478 - 10054.117: 29.4367% ( 125) 00:19:07.340 10054.117 - 10106.757: 30.5845% ( 119) 00:19:07.340 10106.757 - 10159.396: 31.5683% ( 102) 00:19:07.340 10159.396 - 10212.035: 32.7160% ( 119) 00:19:07.340 10212.035 - 10264.675: 33.7674% ( 109) 00:19:07.340 10264.675 - 10317.314: 34.5390% ( 80) 00:19:07.340 10317.314 - 10369.953: 35.2431% ( 73) 00:19:07.340 10369.953 - 10422.593: 36.0532% ( 84) 00:19:07.340 10422.593 - 10475.232: 37.0274% ( 101) 00:19:07.340 10475.232 - 10527.871: 37.9533% ( 96) 00:19:07.340 10527.871 - 10580.511: 39.1590% ( 125) 00:19:07.340 10580.511 - 10633.150: 40.1524% ( 103) 00:19:07.340 10633.150 - 10685.790: 41.2230% ( 111) 00:19:07.340 10685.790 - 10738.429: 42.2261% ( 104) 00:19:07.340 10738.429 - 10791.068: 43.3449% ( 116) 00:19:07.340 10791.068 - 10843.708: 44.2323% ( 92) 00:19:07.340 10843.708 - 10896.347: 45.1003% ( 90) 00:19:07.340 10896.347 - 10948.986: 46.0262% ( 96) 00:19:07.340 10948.986 - 11001.626: 46.7110% ( 71) 00:19:07.340 11001.626 - 11054.265: 47.1644% ( 47) 00:19:07.340 11054.265 - 11106.904: 47.5984% ( 45) 00:19:07.340 11106.904 - 11159.544: 47.9842% ( 40) 00:19:07.340 11159.544 - 11212.183: 48.4664% ( 50) 00:19:07.340 11212.183 - 11264.822: 49.1416% ( 70) 00:19:07.340 11264.822 - 11317.462: 49.7878% ( 67) 00:19:07.340 11317.462 - 11370.101: 50.3569% ( 59) 00:19:07.340 11370.101 - 11422.741: 51.0899% ( 76) 00:19:07.340 11422.741 - 11475.380: 51.9676% ( 91) 00:19:07.340 11475.380 - 11528.019: 52.6524% ( 71) 00:19:07.340 11528.019 - 11580.659: 53.4626% ( 84) 00:19:07.340 11580.659 - 11633.298: 54.2824% ( 85) 00:19:07.340 11633.298 - 11685.937: 54.9769% ( 72) 00:19:07.340 11685.937 - 11738.577: 55.6134% ( 66) 00:19:07.340 11738.577 - 11791.216: 56.2404% ( 65) 00:19:07.340 11791.216 - 11843.855: 56.6937% ( 47) 00:19:07.340 11843.855 - 11896.495: 57.1277% ( 45) 00:19:07.340 11896.495 - 11949.134: 57.8125% ( 71) 00:19:07.340 11949.134 - 12001.773: 58.2562% ( 46) 00:19:07.340 12001.773 - 12054.413: 58.6902% ( 45) 00:19:07.340 12054.413 - 12107.052: 59.1532% ( 48) 00:19:07.340 12107.052 - 12159.692: 59.4907% ( 35) 00:19:07.340 12159.692 - 12212.331: 59.7512% ( 27) 00:19:07.340 12212.331 - 12264.970: 60.0791% ( 34) 00:19:07.340 12264.970 - 12317.610: 60.6867% ( 63) 00:19:07.340 12317.610 - 12370.249: 61.1497% ( 48) 00:19:07.340 12370.249 - 12422.888: 61.6223% ( 49) 00:19:07.340 12422.888 - 12475.528: 62.2299% ( 63) 00:19:07.340 12475.528 - 12528.167: 62.7411% ( 53) 00:19:07.340 12528.167 - 12580.806: 63.1269% ( 40) 00:19:07.340 12580.806 - 12633.446: 63.5995% ( 49) 00:19:07.340 12633.446 - 12686.085: 64.1397% ( 56) 00:19:07.340 12686.085 - 12738.724: 64.6123% ( 49) 00:19:07.340 12738.724 - 12791.364: 65.1620% ( 57) 00:19:07.340 12791.364 - 12844.003: 65.6539% ( 51) 00:19:07.340 12844.003 - 12896.643: 66.1265% ( 49) 00:19:07.340 12896.643 - 12949.282: 67.0235% ( 93) 00:19:07.340 12949.282 - 13001.921: 67.7276% ( 73) 00:19:07.340 13001.921 - 13054.561: 68.1809% ( 47) 00:19:07.340 13054.561 - 13107.200: 68.6632% ( 50) 00:19:07.340 13107.200 - 13159.839: 69.1840% ( 54) 00:19:07.340 13159.839 - 13212.479: 69.6952% ( 53) 00:19:07.340 13212.479 - 13265.118: 70.1196% ( 44) 00:19:07.340 13265.118 - 
13317.757: 70.4475% ( 34) 00:19:07.340 13317.757 - 13370.397: 70.8044% ( 37) 00:19:07.340 13370.397 - 13423.036: 71.1806% ( 39) 00:19:07.340 13423.036 - 13475.676: 71.6049% ( 44) 00:19:07.340 13475.676 - 13580.954: 72.4923% ( 92) 00:19:07.340 13580.954 - 13686.233: 73.3989% ( 94) 00:19:07.340 13686.233 - 13791.512: 73.9101% ( 53) 00:19:07.340 13791.512 - 13896.790: 74.6238% ( 74) 00:19:07.340 13896.790 - 14002.069: 75.5112% ( 92) 00:19:07.340 14002.069 - 14107.348: 76.2731% ( 79) 00:19:07.340 14107.348 - 14212.627: 76.7843% ( 53) 00:19:07.340 14212.627 - 14317.905: 77.3052% ( 54) 00:19:07.340 14317.905 - 14423.184: 77.8164% ( 53) 00:19:07.340 14423.184 - 14528.463: 78.1636% ( 36) 00:19:07.340 14528.463 - 14633.741: 78.5687% ( 42) 00:19:07.340 14633.741 - 14739.020: 78.7809% ( 22) 00:19:07.340 14739.020 - 14844.299: 79.1570% ( 39) 00:19:07.340 14844.299 - 14949.578: 79.7068% ( 57) 00:19:07.340 14949.578 - 15054.856: 80.0733% ( 38) 00:19:07.340 15054.856 - 15160.135: 80.3627% ( 30) 00:19:07.340 15160.135 - 15265.414: 80.7581% ( 41) 00:19:07.340 15265.414 - 15370.692: 81.1728% ( 43) 00:19:07.340 15370.692 - 15475.971: 81.6069% ( 45) 00:19:07.340 15475.971 - 15581.250: 82.1663% ( 58) 00:19:07.340 15581.250 - 15686.529: 83.2658% ( 114) 00:19:07.340 15686.529 - 15791.807: 84.2882% ( 106) 00:19:07.340 15791.807 - 15897.086: 85.1755% ( 92) 00:19:07.340 15897.086 - 16002.365: 85.8796% ( 73) 00:19:07.340 16002.365 - 16107.643: 86.5451% ( 69) 00:19:07.340 16107.643 - 16212.922: 87.5289% ( 102) 00:19:07.340 16212.922 - 16318.201: 88.5802% ( 109) 00:19:07.340 16318.201 - 16423.480: 89.5544% ( 101) 00:19:07.340 16423.480 - 16528.758: 90.3260% ( 80) 00:19:07.340 16528.758 - 16634.037: 91.0976% ( 80) 00:19:07.340 16634.037 - 16739.316: 92.0525% ( 99) 00:19:07.340 16739.316 - 16844.594: 92.5637% ( 53) 00:19:07.340 16844.594 - 16949.873: 93.0170% ( 47) 00:19:07.340 16949.873 - 17055.152: 93.4124% ( 41) 00:19:07.340 17055.152 - 17160.431: 93.9525% ( 56) 00:19:07.340 17160.431 - 17265.709: 94.3287% ( 39) 00:19:07.340 17265.709 - 17370.988: 94.6277% ( 31) 00:19:07.340 17370.988 - 17476.267: 95.0328% ( 42) 00:19:07.340 17476.267 - 17581.545: 95.2932% ( 27) 00:19:07.340 17581.545 - 17686.824: 95.5729% ( 29) 00:19:07.340 17686.824 - 17792.103: 95.7369% ( 17) 00:19:07.340 17792.103 - 17897.382: 95.8623% ( 13) 00:19:07.340 17897.382 - 18002.660: 96.0166% ( 16) 00:19:07.340 18002.660 - 18107.939: 96.1613% ( 15) 00:19:07.340 18107.939 - 18213.218: 96.4410% ( 29) 00:19:07.340 18213.218 - 18318.496: 96.8364% ( 41) 00:19:07.340 18318.496 - 18423.775: 97.2029% ( 38) 00:19:07.340 18423.775 - 18529.054: 97.4151% ( 22) 00:19:07.340 18529.054 - 18634.333: 97.5598% ( 15) 00:19:07.340 18634.333 - 18739.611: 97.6755% ( 12) 00:19:07.340 18739.611 - 18844.890: 97.8009% ( 13) 00:19:07.340 18844.890 - 18950.169: 97.9456% ( 15) 00:19:07.340 18950.169 - 19055.447: 98.0228% ( 8) 00:19:07.340 19055.447 - 19160.726: 98.0421% ( 2) 00:19:07.340 19160.726 - 19266.005: 98.0806% ( 4) 00:19:07.340 19266.005 - 19371.284: 98.1385% ( 6) 00:19:07.340 19371.284 - 19476.562: 98.2253% ( 9) 00:19:07.340 19476.562 - 19581.841: 98.3025% ( 8) 00:19:07.340 19581.841 - 19687.120: 98.5436% ( 25) 00:19:07.340 19687.120 - 19792.398: 98.6304% ( 9) 00:19:07.340 19792.398 - 19897.677: 98.6690% ( 4) 00:19:07.340 19897.677 - 20002.956: 98.6979% ( 3) 00:19:07.340 20002.956 - 20108.235: 98.7461% ( 5) 00:19:07.340 20108.235 - 20213.513: 98.7654% ( 2) 00:19:07.340 28004.138 - 28214.696: 98.8040% ( 4) 00:19:07.340 28214.696 - 28425.253: 98.8812% ( 8) 00:19:07.340 
28425.253 - 28635.810: 98.9583% ( 8) 00:19:07.340 28635.810 - 28846.368: 99.0258% ( 7) 00:19:07.340 28846.368 - 29056.925: 99.0934% ( 7) 00:19:07.340 29056.925 - 29267.483: 99.1705% ( 8) 00:19:07.340 29267.483 - 29478.040: 99.2477% ( 8) 00:19:07.340 29478.040 - 29688.598: 99.3152% ( 7) 00:19:07.340 29688.598 - 29899.155: 99.3827% ( 7) 00:19:07.340 36215.878 - 36426.435: 99.4502% ( 7) 00:19:07.340 36426.435 - 36636.993: 99.5274% ( 8) 00:19:07.340 36636.993 - 36847.550: 99.5949% ( 7) 00:19:07.340 36847.550 - 37058.108: 99.6721% ( 8) 00:19:07.340 37058.108 - 37268.665: 99.7396% ( 7) 00:19:07.340 37268.665 - 37479.222: 99.8167% ( 8) 00:19:07.340 37479.222 - 37689.780: 99.8939% ( 8) 00:19:07.340 37689.780 - 37900.337: 99.9614% ( 7) 00:19:07.340 37900.337 - 38110.895: 100.0000% ( 4) 00:19:07.340 00:19:07.340 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:19:07.340 ============================================================================== 00:19:07.340 Range in us Cumulative IO count 00:19:07.340 7685.346 - 7737.986: 0.0192% ( 2) 00:19:07.340 7843.264 - 7895.904: 0.0863% ( 7) 00:19:07.340 7895.904 - 7948.543: 0.2109% ( 13) 00:19:07.340 7948.543 - 8001.182: 0.3451% ( 14) 00:19:07.340 8001.182 - 8053.822: 0.6806% ( 35) 00:19:07.340 8053.822 - 8106.461: 0.9586% ( 29) 00:19:07.340 8106.461 - 8159.100: 1.4379% ( 50) 00:19:07.340 8159.100 - 8211.740: 1.8788% ( 46) 00:19:07.340 8211.740 - 8264.379: 2.2431% ( 38) 00:19:07.340 8264.379 - 8317.018: 2.8374% ( 62) 00:19:07.340 8317.018 - 8369.658: 3.3646% ( 55) 00:19:07.340 8369.658 - 8422.297: 3.8152% ( 47) 00:19:07.340 8422.297 - 8474.937: 4.2849% ( 49) 00:19:07.340 8474.937 - 8527.576: 4.8505% ( 59) 00:19:07.341 8527.576 - 8580.215: 5.3298% ( 50) 00:19:07.341 8580.215 - 8632.855: 5.8953% ( 59) 00:19:07.341 8632.855 - 8685.494: 6.3171% ( 44) 00:19:07.341 8685.494 - 8738.133: 7.0744% ( 79) 00:19:07.341 8738.133 - 8790.773: 7.6016% ( 55) 00:19:07.341 8790.773 - 8843.412: 8.2151% ( 64) 00:19:07.341 8843.412 - 8896.051: 8.7807% ( 59) 00:19:07.341 8896.051 - 8948.691: 9.1833% ( 42) 00:19:07.341 8948.691 - 9001.330: 9.6051% ( 44) 00:19:07.341 9001.330 - 9053.969: 10.0460% ( 46) 00:19:07.341 9053.969 - 9106.609: 10.6308% ( 61) 00:19:07.341 9106.609 - 9159.248: 11.1196% ( 51) 00:19:07.341 9159.248 - 9211.888: 11.9824% ( 90) 00:19:07.341 9211.888 - 9264.527: 12.5479% ( 59) 00:19:07.341 9264.527 - 9317.166: 13.0656% ( 54) 00:19:07.341 9317.166 - 9369.806: 13.5449% ( 50) 00:19:07.341 9369.806 - 9422.445: 14.5130% ( 101) 00:19:07.341 9422.445 - 9475.084: 15.3854% ( 91) 00:19:07.341 9475.084 - 9527.724: 16.5548% ( 122) 00:19:07.341 9527.724 - 9580.363: 17.7722% ( 127) 00:19:07.341 9580.363 - 9633.002: 19.1526% ( 144) 00:19:07.341 9633.002 - 9685.642: 20.4371% ( 134) 00:19:07.341 9685.642 - 9738.281: 21.6354% ( 125) 00:19:07.341 9738.281 - 9790.920: 23.1787% ( 161) 00:19:07.341 9790.920 - 9843.560: 24.4153% ( 129) 00:19:07.341 9843.560 - 9896.199: 25.5847% ( 122) 00:19:07.341 9896.199 - 9948.839: 26.6679% ( 113) 00:19:07.341 9948.839 - 10001.478: 27.8374% ( 122) 00:19:07.341 10001.478 - 10054.117: 28.9494% ( 116) 00:19:07.341 10054.117 - 10106.757: 30.0230% ( 112) 00:19:07.341 10106.757 - 10159.396: 31.3459% ( 138) 00:19:07.341 10159.396 - 10212.035: 32.7550% ( 147) 00:19:07.341 10212.035 - 10264.675: 33.9724% ( 127) 00:19:07.341 10264.675 - 10317.314: 35.1419% ( 122) 00:19:07.341 10317.314 - 10369.953: 36.1196% ( 102) 00:19:07.341 10369.953 - 10422.593: 36.7427% ( 65) 00:19:07.341 10422.593 - 10475.232: 37.5767% ( 87) 00:19:07.341 10475.232 - 
10527.871: 38.3723% ( 83) 00:19:07.341 10527.871 - 10580.511: 39.3021% ( 97) 00:19:07.341 10580.511 - 10633.150: 40.4620% ( 121) 00:19:07.341 10633.150 - 10685.790: 41.3439% ( 92) 00:19:07.341 10685.790 - 10738.429: 42.0245% ( 71) 00:19:07.341 10738.429 - 10791.068: 42.6189% ( 62) 00:19:07.341 10791.068 - 10843.708: 43.5199% ( 94) 00:19:07.341 10843.708 - 10896.347: 44.2101% ( 72) 00:19:07.341 10896.347 - 10948.986: 44.8907% ( 71) 00:19:07.341 10948.986 - 11001.626: 45.7535% ( 90) 00:19:07.341 11001.626 - 11054.265: 46.6929% ( 98) 00:19:07.341 11054.265 - 11106.904: 47.3926% ( 73) 00:19:07.341 11106.904 - 11159.544: 48.0637% ( 70) 00:19:07.341 11159.544 - 11212.183: 48.7538% ( 72) 00:19:07.341 11212.183 - 11264.822: 49.4824% ( 76) 00:19:07.341 11264.822 - 11317.462: 50.2492% ( 80) 00:19:07.341 11317.462 - 11370.101: 50.8244% ( 60) 00:19:07.341 11370.101 - 11422.741: 51.2558% ( 45) 00:19:07.341 11422.741 - 11475.380: 51.8884% ( 66) 00:19:07.341 11475.380 - 11528.019: 52.6457% ( 79) 00:19:07.341 11528.019 - 11580.659: 53.4605% ( 85) 00:19:07.341 11580.659 - 11633.298: 54.2178% ( 79) 00:19:07.341 11633.298 - 11685.937: 54.8984% ( 71) 00:19:07.341 11685.937 - 11738.577: 55.4735% ( 60) 00:19:07.341 11738.577 - 11791.216: 56.1254% ( 68) 00:19:07.341 11791.216 - 11843.855: 56.6047% ( 50) 00:19:07.341 11843.855 - 11896.495: 57.0265% ( 44) 00:19:07.341 11896.495 - 11949.134: 57.5537% ( 55) 00:19:07.341 11949.134 - 12001.773: 58.0905% ( 56) 00:19:07.341 12001.773 - 12054.413: 58.6752% ( 61) 00:19:07.341 12054.413 - 12107.052: 59.1833% ( 53) 00:19:07.341 12107.052 - 12159.692: 59.5763% ( 41) 00:19:07.341 12159.692 - 12212.331: 60.1419% ( 59) 00:19:07.341 12212.331 - 12264.970: 60.4486% ( 32) 00:19:07.341 12264.970 - 12317.610: 60.8416% ( 41) 00:19:07.341 12317.610 - 12370.249: 61.1676% ( 34) 00:19:07.341 12370.249 - 12422.888: 61.5127% ( 36) 00:19:07.341 12422.888 - 12475.528: 61.8673% ( 37) 00:19:07.341 12475.528 - 12528.167: 62.4617% ( 62) 00:19:07.341 12528.167 - 12580.806: 62.9889% ( 55) 00:19:07.341 12580.806 - 12633.446: 63.4873% ( 52) 00:19:07.341 12633.446 - 12686.085: 63.8708% ( 40) 00:19:07.341 12686.085 - 12738.724: 64.2638% ( 41) 00:19:07.341 12738.724 - 12791.364: 64.6760% ( 43) 00:19:07.341 12791.364 - 12844.003: 65.0978% ( 44) 00:19:07.341 12844.003 - 12896.643: 65.7880% ( 72) 00:19:07.341 12896.643 - 12949.282: 66.3248% ( 56) 00:19:07.341 12949.282 - 13001.921: 66.7945% ( 49) 00:19:07.341 13001.921 - 13054.561: 67.3025% ( 53) 00:19:07.341 13054.561 - 13107.200: 67.8393% ( 56) 00:19:07.341 13107.200 - 13159.839: 68.4433% ( 63) 00:19:07.341 13159.839 - 13212.479: 69.1526% ( 74) 00:19:07.341 13212.479 - 13265.118: 69.8715% ( 75) 00:19:07.341 13265.118 - 13317.757: 70.3700% ( 52) 00:19:07.341 13317.757 - 13370.397: 70.7918% ( 44) 00:19:07.341 13370.397 - 13423.036: 71.3574% ( 59) 00:19:07.341 13423.036 - 13475.676: 71.8654% ( 53) 00:19:07.341 13475.676 - 13580.954: 72.6419% ( 81) 00:19:07.341 13580.954 - 13686.233: 73.3992% ( 79) 00:19:07.341 13686.233 - 13791.512: 73.8593% ( 48) 00:19:07.341 13791.512 - 13896.790: 74.2044% ( 36) 00:19:07.341 13896.790 - 14002.069: 74.7124% ( 53) 00:19:07.341 14002.069 - 14107.348: 75.2301% ( 54) 00:19:07.341 14107.348 - 14212.627: 75.6710% ( 46) 00:19:07.341 14212.627 - 14317.905: 76.1503% ( 50) 00:19:07.341 14317.905 - 14423.184: 76.8501% ( 73) 00:19:07.341 14423.184 - 14528.463: 77.4732% ( 65) 00:19:07.341 14528.463 - 14633.741: 78.0771% ( 63) 00:19:07.341 14633.741 - 14739.020: 78.4701% ( 41) 00:19:07.341 14739.020 - 14844.299: 78.7673% ( 31) 
00:19:07.341 14844.299 - 14949.578: 79.2274% ( 48) 00:19:07.341 14949.578 - 15054.856: 79.7738% ( 57) 00:19:07.341 15054.856 - 15160.135: 80.5119% ( 77) 00:19:07.341 15160.135 - 15265.414: 81.9114% ( 146) 00:19:07.341 15265.414 - 15370.692: 82.8892% ( 102) 00:19:07.341 15370.692 - 15475.971: 83.6656% ( 81) 00:19:07.341 15475.971 - 15581.250: 84.1258% ( 48) 00:19:07.341 15581.250 - 15686.529: 84.6626% ( 56) 00:19:07.341 15686.529 - 15791.807: 85.4486% ( 82) 00:19:07.341 15791.807 - 15897.086: 85.9375% ( 51) 00:19:07.341 15897.086 - 16002.365: 86.5222% ( 61) 00:19:07.341 16002.365 - 16107.643: 87.0878% ( 59) 00:19:07.341 16107.643 - 16212.922: 87.6150% ( 55) 00:19:07.341 16212.922 - 16318.201: 88.1039% ( 51) 00:19:07.341 16318.201 - 16423.480: 88.6215% ( 54) 00:19:07.341 16423.480 - 16528.758: 89.2542% ( 66) 00:19:07.341 16528.758 - 16634.037: 89.9444% ( 72) 00:19:07.341 16634.037 - 16739.316: 91.0276% ( 113) 00:19:07.341 16739.316 - 16844.594: 91.8041% ( 81) 00:19:07.341 16844.594 - 16949.873: 92.5997% ( 83) 00:19:07.341 16949.873 - 17055.152: 93.2803% ( 71) 00:19:07.341 17055.152 - 17160.431: 94.1334% ( 89) 00:19:07.341 17160.431 - 17265.709: 94.6607% ( 55) 00:19:07.341 17265.709 - 17370.988: 94.9674% ( 32) 00:19:07.341 17370.988 - 17476.267: 95.2358% ( 28) 00:19:07.341 17476.267 - 17581.545: 95.4371% ( 21) 00:19:07.341 17581.545 - 17686.824: 95.6768% ( 25) 00:19:07.341 17686.824 - 17792.103: 96.0219% ( 36) 00:19:07.341 17792.103 - 17897.382: 96.1369% ( 12) 00:19:07.341 17897.382 - 18002.660: 96.4340% ( 31) 00:19:07.341 18002.660 - 18107.939: 96.5970% ( 17) 00:19:07.341 18107.939 - 18213.218: 96.6929% ( 10) 00:19:07.341 18213.218 - 18318.496: 96.7983% ( 11) 00:19:07.341 18318.496 - 18423.775: 96.9038% ( 11) 00:19:07.341 18423.775 - 18529.054: 97.2201% ( 33) 00:19:07.341 18529.054 - 18634.333: 97.5556% ( 35) 00:19:07.341 18634.333 - 18739.611: 97.6994% ( 15) 00:19:07.341 18739.611 - 18844.890: 97.7665% ( 7) 00:19:07.341 18844.890 - 18950.169: 97.8336% ( 7) 00:19:07.341 18950.169 - 19055.447: 97.8911% ( 6) 00:19:07.341 19055.447 - 19160.726: 97.9390% ( 5) 00:19:07.341 19160.726 - 19266.005: 97.9870% ( 5) 00:19:07.341 19266.005 - 19371.284: 98.0253% ( 4) 00:19:07.341 19371.284 - 19476.562: 98.0637% ( 4) 00:19:07.341 19476.562 - 19581.841: 98.1595% ( 10) 00:19:07.341 19581.841 - 19687.120: 98.2841% ( 13) 00:19:07.341 19687.120 - 19792.398: 98.6100% ( 34) 00:19:07.341 19792.398 - 19897.677: 98.8305% ( 23) 00:19:07.341 19897.677 - 20002.956: 98.9168% ( 9) 00:19:07.341 20002.956 - 20108.235: 98.9935% ( 8) 00:19:07.341 20108.235 - 20213.513: 99.0318% ( 4) 00:19:07.341 20213.513 - 20318.792: 99.0702% ( 4) 00:19:07.341 20318.792 - 20424.071: 99.1085% ( 4) 00:19:07.341 20424.071 - 20529.349: 99.1564% ( 5) 00:19:07.341 20529.349 - 20634.628: 99.1852% ( 3) 00:19:07.341 20634.628 - 20739.907: 99.2235% ( 4) 00:19:07.341 20739.907 - 20845.186: 99.2619% ( 4) 00:19:07.341 20845.186 - 20950.464: 99.3002% ( 4) 00:19:07.341 20950.464 - 21055.743: 99.3386% ( 4) 00:19:07.341 21055.743 - 21161.022: 99.3673% ( 3) 00:19:07.341 21161.022 - 21266.300: 99.3865% ( 2) 00:19:07.341 27372.466 - 27583.023: 99.3961% ( 1) 00:19:07.341 27583.023 - 27793.581: 99.4632% ( 7) 00:19:07.341 27793.581 - 28004.138: 99.5399% ( 8) 00:19:07.341 28004.138 - 28214.696: 99.6070% ( 7) 00:19:07.341 28214.696 - 28425.253: 99.6837% ( 8) 00:19:07.341 28425.253 - 28635.810: 99.7508% ( 7) 00:19:07.341 28635.810 - 28846.368: 99.8179% ( 7) 00:19:07.341 28846.368 - 29056.925: 99.8850% ( 7) 00:19:07.341 29056.925 - 29267.483: 99.9521% ( 7) 
00:19:07.341 29267.483 - 29478.040: 100.0000% ( 5) 00:19:07.341 00:19:07.341 12:21:16 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:19:07.341 00:19:07.341 real 0m2.618s 00:19:07.341 user 0m2.237s 00:19:07.341 sys 0m0.283s 00:19:07.341 12:21:16 nvme.nvme_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:07.341 12:21:16 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:19:07.341 ************************************ 00:19:07.341 END TEST nvme_perf 00:19:07.341 ************************************ 00:19:07.341 12:21:16 nvme -- common/autotest_common.sh@1142 -- # return 0 00:19:07.341 12:21:16 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:19:07.341 12:21:16 nvme -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:19:07.341 12:21:16 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:07.341 12:21:16 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:07.341 ************************************ 00:19:07.341 START TEST nvme_hello_world 00:19:07.341 ************************************ 00:19:07.341 12:21:16 nvme.nvme_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:19:07.341 Initializing NVMe Controllers 00:19:07.341 Attached to 0000:00:10.0 00:19:07.341 Namespace ID: 1 size: 6GB 00:19:07.341 Attached to 0000:00:11.0 00:19:07.341 Namespace ID: 1 size: 5GB 00:19:07.342 Attached to 0000:00:13.0 00:19:07.342 Namespace ID: 1 size: 1GB 00:19:07.342 Attached to 0000:00:12.0 00:19:07.342 Namespace ID: 1 size: 4GB 00:19:07.342 Namespace ID: 2 size: 4GB 00:19:07.342 Namespace ID: 3 size: 4GB 00:19:07.342 Initialization complete. 00:19:07.342 INFO: using host memory buffer for IO 00:19:07.342 Hello world! 00:19:07.342 INFO: using host memory buffer for IO 00:19:07.342 Hello world! 00:19:07.342 INFO: using host memory buffer for IO 00:19:07.342 Hello world! 00:19:07.342 INFO: using host memory buffer for IO 00:19:07.342 Hello world! 00:19:07.342 INFO: using host memory buffer for IO 00:19:07.342 Hello world! 00:19:07.342 INFO: using host memory buffer for IO 00:19:07.342 Hello world! 
00:19:07.342 00:19:07.342 real 0m0.288s 00:19:07.342 user 0m0.121s 00:19:07.342 sys 0m0.124s 00:19:07.342 12:21:16 nvme.nvme_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:07.342 ************************************ 00:19:07.342 12:21:16 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:19:07.342 END TEST nvme_hello_world 00:19:07.342 ************************************ 00:19:07.601 12:21:16 nvme -- common/autotest_common.sh@1142 -- # return 0 00:19:07.601 12:21:16 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:19:07.601 12:21:16 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:07.601 12:21:16 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:07.601 12:21:16 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:07.601 ************************************ 00:19:07.601 START TEST nvme_sgl 00:19:07.601 ************************************ 00:19:07.601 12:21:16 nvme.nvme_sgl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:19:07.861 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:19:07.861 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:19:07.861 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:19:07.861 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:19:07.861 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:19:07.861 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:19:07.861 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:19:07.861 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:19:07.861 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:19:07.861 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:19:07.861 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:19:07.861 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:19:07.861 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:19:07.861 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:19:07.861 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:19:07.861 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:19:07.861 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:19:07.861 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:19:07.861 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:19:07.861 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:19:07.861 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:19:07.861 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:19:07.861 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:19:07.861 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:19:07.861 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:19:07.861 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:19:07.861 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:19:07.861 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:19:07.861 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:19:07.861 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:19:07.861 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:19:07.861 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:19:07.861 0000:00:12.0: build_io_request_8 Invalid IO length parameter 
00:19:07.861 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:19:07.861 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:19:07.861 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:19:07.861 NVMe Readv/Writev Request test 00:19:07.861 Attached to 0000:00:10.0 00:19:07.861 Attached to 0000:00:11.0 00:19:07.861 Attached to 0000:00:13.0 00:19:07.861 Attached to 0000:00:12.0 00:19:07.861 0000:00:10.0: build_io_request_2 test passed 00:19:07.861 0000:00:10.0: build_io_request_4 test passed 00:19:07.861 0000:00:10.0: build_io_request_5 test passed 00:19:07.861 0000:00:10.0: build_io_request_6 test passed 00:19:07.861 0000:00:10.0: build_io_request_7 test passed 00:19:07.861 0000:00:10.0: build_io_request_10 test passed 00:19:07.861 0000:00:11.0: build_io_request_2 test passed 00:19:07.861 0000:00:11.0: build_io_request_4 test passed 00:19:07.861 0000:00:11.0: build_io_request_5 test passed 00:19:07.861 0000:00:11.0: build_io_request_6 test passed 00:19:07.861 0000:00:11.0: build_io_request_7 test passed 00:19:07.861 0000:00:11.0: build_io_request_10 test passed 00:19:07.861 Cleaning up... 00:19:07.861 00:19:07.861 real 0m0.345s 00:19:07.861 user 0m0.162s 00:19:07.861 sys 0m0.142s 00:19:07.861 12:21:17 nvme.nvme_sgl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:07.861 12:21:17 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:19:07.861 ************************************ 00:19:07.861 END TEST nvme_sgl 00:19:07.861 ************************************ 00:19:07.861 12:21:17 nvme -- common/autotest_common.sh@1142 -- # return 0 00:19:07.861 12:21:17 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:19:07.861 12:21:17 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:07.861 12:21:17 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:07.861 12:21:17 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:07.861 ************************************ 00:19:07.861 START TEST nvme_e2edp 00:19:07.861 ************************************ 00:19:07.861 12:21:17 nvme.nvme_e2edp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:19:08.120 NVMe Write/Read with End-to-End data protection test 00:19:08.120 Attached to 0000:00:10.0 00:19:08.120 Attached to 0000:00:11.0 00:19:08.120 Attached to 0000:00:13.0 00:19:08.120 Attached to 0000:00:12.0 00:19:08.120 Cleaning up... 
00:19:08.120 00:19:08.120 real 0m0.258s 00:19:08.120 user 0m0.090s 00:19:08.120 sys 0m0.127s 00:19:08.120 12:21:17 nvme.nvme_e2edp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:08.120 12:21:17 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:19:08.120 ************************************ 00:19:08.120 END TEST nvme_e2edp 00:19:08.120 ************************************ 00:19:08.379 12:21:17 nvme -- common/autotest_common.sh@1142 -- # return 0 00:19:08.379 12:21:17 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:19:08.379 12:21:17 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:08.379 12:21:17 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:08.379 12:21:17 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:08.379 ************************************ 00:19:08.379 START TEST nvme_reserve 00:19:08.379 ************************************ 00:19:08.379 12:21:17 nvme.nvme_reserve -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:19:08.638 ===================================================== 00:19:08.638 NVMe Controller at PCI bus 0, device 16, function 0 00:19:08.638 ===================================================== 00:19:08.638 Reservations: Not Supported 00:19:08.638 ===================================================== 00:19:08.638 NVMe Controller at PCI bus 0, device 17, function 0 00:19:08.638 ===================================================== 00:19:08.638 Reservations: Not Supported 00:19:08.638 ===================================================== 00:19:08.638 NVMe Controller at PCI bus 0, device 19, function 0 00:19:08.638 ===================================================== 00:19:08.638 Reservations: Not Supported 00:19:08.638 ===================================================== 00:19:08.638 NVMe Controller at PCI bus 0, device 18, function 0 00:19:08.638 ===================================================== 00:19:08.638 Reservations: Not Supported 00:19:08.638 Reservation test passed 00:19:08.638 00:19:08.638 real 0m0.278s 00:19:08.638 user 0m0.098s 00:19:08.638 sys 0m0.138s 00:19:08.638 12:21:17 nvme.nvme_reserve -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:08.638 ************************************ 00:19:08.638 12:21:17 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:19:08.638 END TEST nvme_reserve 00:19:08.638 ************************************ 00:19:08.638 12:21:17 nvme -- common/autotest_common.sh@1142 -- # return 0 00:19:08.638 12:21:17 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:19:08.638 12:21:17 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:08.638 12:21:17 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:08.638 12:21:17 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:08.638 ************************************ 00:19:08.638 START TEST nvme_err_injection 00:19:08.638 ************************************ 00:19:08.638 12:21:17 nvme.nvme_err_injection -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:19:08.896 NVMe Error Injection test 00:19:08.896 Attached to 0000:00:10.0 00:19:08.896 Attached to 0000:00:11.0 00:19:08.896 Attached to 0000:00:13.0 00:19:08.896 Attached to 0000:00:12.0 00:19:08.896 0000:00:13.0: get features failed as expected 00:19:08.896 0000:00:12.0: get features 
failed as expected 00:19:08.896 0000:00:10.0: get features failed as expected 00:19:08.896 0000:00:11.0: get features failed as expected 00:19:08.896 0000:00:10.0: get features successfully as expected 00:19:08.896 0000:00:11.0: get features successfully as expected 00:19:08.896 0000:00:13.0: get features successfully as expected 00:19:08.896 0000:00:12.0: get features successfully as expected 00:19:08.896 0000:00:10.0: read failed as expected 00:19:08.896 0000:00:11.0: read failed as expected 00:19:08.896 0000:00:13.0: read failed as expected 00:19:08.896 0000:00:12.0: read failed as expected 00:19:08.896 0000:00:10.0: read successfully as expected 00:19:08.896 0000:00:11.0: read successfully as expected 00:19:08.896 0000:00:13.0: read successfully as expected 00:19:08.896 0000:00:12.0: read successfully as expected 00:19:08.896 Cleaning up... 00:19:08.896 00:19:08.896 real 0m0.277s 00:19:08.896 user 0m0.106s 00:19:08.896 sys 0m0.128s 00:19:08.896 12:21:18 nvme.nvme_err_injection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:08.896 12:21:18 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:19:08.896 ************************************ 00:19:08.896 END TEST nvme_err_injection 00:19:08.896 ************************************ 00:19:08.896 12:21:18 nvme -- common/autotest_common.sh@1142 -- # return 0 00:19:08.896 12:21:18 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:19:08.896 12:21:18 nvme -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:19:08.896 12:21:18 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:08.896 12:21:18 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:08.896 ************************************ 00:19:08.896 START TEST nvme_overhead 00:19:08.896 ************************************ 00:19:08.896 12:21:18 nvme.nvme_overhead -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:19:10.272 Initializing NVMe Controllers 00:19:10.272 Attached to 0000:00:10.0 00:19:10.272 Attached to 0000:00:11.0 00:19:10.272 Attached to 0000:00:13.0 00:19:10.272 Attached to 0000:00:12.0 00:19:10.272 Initialization complete. Launching workers. 
00:19:10.272 submit (in ns) avg, min, max = 13571.4, 11494.8, 64355.0 00:19:10.272 complete (in ns) avg, min, max = 8582.3, 7746.2, 93386.3 00:19:10.272 00:19:10.272 Submit histogram 00:19:10.272 ================ 00:19:10.272 Range in us Cumulative Count 00:19:10.272 11.463 - 11.515: 0.0136% ( 1) 00:19:10.272 11.823 - 11.875: 0.0272% ( 1) 00:19:10.272 11.978 - 12.029: 0.0408% ( 1) 00:19:10.272 12.029 - 12.080: 0.0816% ( 3) 00:19:10.272 12.080 - 12.132: 0.2175% ( 10) 00:19:10.272 12.132 - 12.183: 0.3263% ( 8) 00:19:10.272 12.183 - 12.235: 0.5303% ( 15) 00:19:10.272 12.235 - 12.286: 0.8294% ( 22) 00:19:10.272 12.286 - 12.337: 1.2101% ( 28) 00:19:10.272 12.337 - 12.389: 1.5228% ( 23) 00:19:10.272 12.389 - 12.440: 1.8219% ( 22) 00:19:10.272 12.440 - 12.492: 2.1074% ( 21) 00:19:10.272 12.492 - 12.543: 2.4473% ( 25) 00:19:10.272 12.543 - 12.594: 2.7736% ( 24) 00:19:10.272 12.594 - 12.646: 3.2359% ( 34) 00:19:10.272 12.646 - 12.697: 3.6710% ( 32) 00:19:10.272 12.697 - 12.749: 4.2556% ( 43) 00:19:10.272 12.749 - 12.800: 5.4385% ( 87) 00:19:10.272 12.800 - 12.851: 6.9613% ( 112) 00:19:10.272 12.851 - 12.903: 9.3270% ( 174) 00:19:10.272 12.903 - 12.954: 12.7396% ( 251) 00:19:10.272 12.954 - 13.006: 17.4031% ( 343) 00:19:10.272 13.006 - 13.057: 23.6982% ( 463) 00:19:10.272 13.057 - 13.108: 31.0537% ( 541) 00:19:10.272 13.108 - 13.160: 38.2325% ( 528) 00:19:10.272 13.160 - 13.263: 52.4949% ( 1049) 00:19:10.272 13.263 - 13.365: 64.3508% ( 872) 00:19:10.272 13.365 - 13.468: 74.5615% ( 751) 00:19:10.272 13.468 - 13.571: 81.6315% ( 520) 00:19:10.272 13.571 - 13.674: 86.4582% ( 355) 00:19:10.272 13.674 - 13.777: 89.4494% ( 220) 00:19:10.272 13.777 - 13.880: 91.5160% ( 152) 00:19:10.272 13.880 - 13.982: 92.5493% ( 76) 00:19:10.272 13.982 - 14.085: 93.4058% ( 63) 00:19:10.272 14.085 - 14.188: 93.9769% ( 42) 00:19:10.272 14.188 - 14.291: 94.2080% ( 17) 00:19:10.272 14.291 - 14.394: 94.3848% ( 13) 00:19:10.272 14.394 - 14.496: 94.5343% ( 11) 00:19:10.272 14.496 - 14.599: 94.5887% ( 4) 00:19:10.272 14.599 - 14.702: 94.6431% ( 4) 00:19:10.272 14.702 - 14.805: 94.6703% ( 2) 00:19:10.272 14.805 - 14.908: 94.6975% ( 2) 00:19:10.272 14.908 - 15.010: 94.7519% ( 4) 00:19:10.272 15.010 - 15.113: 94.8063% ( 4) 00:19:10.272 15.113 - 15.216: 94.8878% ( 6) 00:19:10.272 15.216 - 15.319: 94.9150% ( 2) 00:19:10.272 15.319 - 15.422: 94.9694% ( 4) 00:19:10.272 15.422 - 15.524: 94.9966% ( 2) 00:19:10.272 15.524 - 15.627: 95.0102% ( 1) 00:19:10.272 15.730 - 15.833: 95.0510% ( 3) 00:19:10.272 15.833 - 15.936: 95.0646% ( 1) 00:19:10.272 15.936 - 16.039: 95.0782% ( 1) 00:19:10.272 16.039 - 16.141: 95.1054% ( 2) 00:19:10.272 16.141 - 16.244: 95.1598% ( 4) 00:19:10.272 16.244 - 16.347: 95.2005% ( 3) 00:19:10.272 16.347 - 16.450: 95.2549% ( 4) 00:19:10.272 16.450 - 16.553: 95.3093% ( 4) 00:19:10.273 16.553 - 16.655: 95.3229% ( 1) 00:19:10.273 16.655 - 16.758: 95.3637% ( 3) 00:19:10.273 16.758 - 16.861: 95.4045% ( 3) 00:19:10.273 16.861 - 16.964: 95.4725% ( 5) 00:19:10.273 16.964 - 17.067: 95.5133% ( 3) 00:19:10.273 17.067 - 17.169: 95.5948% ( 6) 00:19:10.273 17.169 - 17.272: 95.7716% ( 13) 00:19:10.273 17.272 - 17.375: 95.9347% ( 12) 00:19:10.273 17.375 - 17.478: 96.1115% ( 13) 00:19:10.273 17.478 - 17.581: 96.3970% ( 21) 00:19:10.273 17.581 - 17.684: 96.4242% ( 2) 00:19:10.273 17.684 - 17.786: 96.5874% ( 12) 00:19:10.273 17.786 - 17.889: 96.7505% ( 12) 00:19:10.273 17.889 - 17.992: 96.8049% ( 4) 00:19:10.273 17.992 - 18.095: 96.9545% ( 11) 00:19:10.273 18.095 - 18.198: 97.1176% ( 12) 00:19:10.273 18.198 - 18.300: 97.2400% ( 9) 
00:19:10.273 18.300 - 18.403: 97.2944% ( 4) 00:19:10.273 18.403 - 18.506: 97.4031% ( 8) 00:19:10.273 18.506 - 18.609: 97.4983% ( 7) 00:19:10.273 18.609 - 18.712: 97.5799% ( 6) 00:19:10.273 18.712 - 18.814: 97.7974% ( 16) 00:19:10.273 18.814 - 18.917: 97.8654% ( 5) 00:19:10.273 18.917 - 19.020: 97.9878% ( 9) 00:19:10.273 19.020 - 19.123: 98.1237% ( 10) 00:19:10.273 19.123 - 19.226: 98.2325% ( 8) 00:19:10.273 19.226 - 19.329: 98.3005% ( 5) 00:19:10.273 19.329 - 19.431: 98.3413% ( 3) 00:19:10.273 19.431 - 19.534: 98.4228% ( 6) 00:19:10.273 19.534 - 19.637: 98.4908% ( 5) 00:19:10.273 19.637 - 19.740: 98.5588% ( 5) 00:19:10.273 19.740 - 19.843: 98.6268% ( 5) 00:19:10.273 19.843 - 19.945: 98.7492% ( 9) 00:19:10.273 19.945 - 20.048: 98.8443% ( 7) 00:19:10.273 20.048 - 20.151: 98.9395% ( 7) 00:19:10.273 20.151 - 20.254: 99.0075% ( 5) 00:19:10.273 20.254 - 20.357: 99.0347% ( 2) 00:19:10.273 20.357 - 20.459: 99.1027% ( 5) 00:19:10.273 20.459 - 20.562: 99.1706% ( 5) 00:19:10.273 20.665 - 20.768: 99.1978% ( 2) 00:19:10.273 20.768 - 20.871: 99.2250% ( 2) 00:19:10.273 20.871 - 20.973: 99.2794% ( 4) 00:19:10.273 20.973 - 21.076: 99.3066% ( 2) 00:19:10.273 21.076 - 21.179: 99.3202% ( 1) 00:19:10.273 21.179 - 21.282: 99.3474% ( 2) 00:19:10.273 21.282 - 21.385: 99.3746% ( 2) 00:19:10.273 21.488 - 21.590: 99.3882% ( 1) 00:19:10.273 21.590 - 21.693: 99.4154% ( 2) 00:19:10.273 21.693 - 21.796: 99.4290% ( 1) 00:19:10.273 21.796 - 21.899: 99.4697% ( 3) 00:19:10.273 22.002 - 22.104: 99.4833% ( 1) 00:19:10.273 22.104 - 22.207: 99.4969% ( 1) 00:19:10.273 22.721 - 22.824: 99.5105% ( 1) 00:19:10.273 22.824 - 22.927: 99.5241% ( 1) 00:19:10.273 22.927 - 23.030: 99.5513% ( 2) 00:19:10.273 23.235 - 23.338: 99.5649% ( 1) 00:19:10.273 23.338 - 23.441: 99.5785% ( 1) 00:19:10.273 23.852 - 23.955: 99.6057% ( 2) 00:19:10.273 23.955 - 24.058: 99.6329% ( 2) 00:19:10.273 24.263 - 24.366: 99.6601% ( 2) 00:19:10.273 24.366 - 24.469: 99.6873% ( 2) 00:19:10.273 24.469 - 24.572: 99.7009% ( 1) 00:19:10.273 25.086 - 25.189: 99.7145% ( 1) 00:19:10.273 26.217 - 26.320: 99.7281% ( 1) 00:19:10.273 26.320 - 26.525: 99.7417% ( 1) 00:19:10.273 26.731 - 26.937: 99.7553% ( 1) 00:19:10.273 27.142 - 27.348: 99.7689% ( 1) 00:19:10.273 28.993 - 29.198: 99.7825% ( 1) 00:19:10.273 30.227 - 30.432: 99.8097% ( 2) 00:19:10.273 30.432 - 30.638: 99.8368% ( 2) 00:19:10.273 30.638 - 30.843: 99.8504% ( 1) 00:19:10.273 30.843 - 31.049: 99.8640% ( 1) 00:19:10.273 31.255 - 31.460: 99.8776% ( 1) 00:19:10.273 31.460 - 31.666: 99.8912% ( 1) 00:19:10.273 31.666 - 31.871: 99.9048% ( 1) 00:19:10.273 31.871 - 32.077: 99.9184% ( 1) 00:19:10.273 33.928 - 34.133: 99.9320% ( 1) 00:19:10.273 35.367 - 35.573: 99.9456% ( 1) 00:19:10.273 37.629 - 37.835: 99.9592% ( 1) 00:19:10.273 50.789 - 50.994: 99.9728% ( 1) 00:19:10.273 54.284 - 54.696: 99.9864% ( 1) 00:19:10.273 64.154 - 64.565: 100.0000% ( 1) 00:19:10.273 00:19:10.273 Complete histogram 00:19:10.273 ================== 00:19:10.273 Range in us Cumulative Count 00:19:10.273 7.711 - 7.762: 0.0408% ( 3) 00:19:10.273 7.762 - 7.814: 1.4276% ( 102) 00:19:10.273 7.814 - 7.865: 5.8600% ( 326) 00:19:10.273 7.865 - 7.916: 13.5826% ( 568) 00:19:10.273 7.916 - 7.968: 24.2284% ( 783) 00:19:10.273 7.968 - 8.019: 34.7111% ( 771) 00:19:10.273 8.019 - 8.071: 42.5969% ( 580) 00:19:10.273 8.071 - 8.122: 49.4494% ( 504) 00:19:10.273 8.122 - 8.173: 54.9966% ( 408) 00:19:10.273 8.173 - 8.225: 58.6540% ( 269) 00:19:10.273 8.225 - 8.276: 61.0741% ( 178) 00:19:10.273 8.276 - 8.328: 62.6513% ( 116) 00:19:10.273 8.328 - 8.379: 63.6710% ( 75) 
00:19:10.273 8.379 - 8.431: 64.7179% ( 77) 00:19:10.273 8.431 - 8.482: 66.2407% ( 112) 00:19:10.273 8.482 - 8.533: 68.7016% ( 181) 00:19:10.273 8.533 - 8.585: 70.6050% ( 140) 00:19:10.273 8.585 - 8.636: 71.8151% ( 89) 00:19:10.273 8.636 - 8.688: 73.1883% ( 101) 00:19:10.273 8.688 - 8.739: 75.1734% ( 146) 00:19:10.273 8.739 - 8.790: 77.4303% ( 166) 00:19:10.273 8.790 - 8.842: 79.2386% ( 133) 00:19:10.273 8.842 - 8.893: 81.1829% ( 143) 00:19:10.273 8.893 - 8.945: 83.2631% ( 153) 00:19:10.273 8.945 - 8.996: 85.1937% ( 142) 00:19:10.273 8.996 - 9.047: 86.8797% ( 124) 00:19:10.273 9.047 - 9.099: 88.1849% ( 96) 00:19:10.273 9.099 - 9.150: 89.8436% ( 122) 00:19:10.273 9.150 - 9.202: 91.1217% ( 94) 00:19:10.273 9.202 - 9.253: 92.1958% ( 79) 00:19:10.273 9.253 - 9.304: 93.0659% ( 64) 00:19:10.273 9.304 - 9.356: 93.8001% ( 54) 00:19:10.273 9.356 - 9.407: 94.3848% ( 43) 00:19:10.273 9.407 - 9.459: 94.7655% ( 28) 00:19:10.273 9.459 - 9.510: 95.0646% ( 22) 00:19:10.273 9.510 - 9.561: 95.3093% ( 18) 00:19:10.273 9.561 - 9.613: 95.5812% ( 20) 00:19:10.273 9.613 - 9.664: 95.7308% ( 11) 00:19:10.273 9.664 - 9.716: 95.8532% ( 9) 00:19:10.273 9.716 - 9.767: 96.0299% ( 13) 00:19:10.273 9.767 - 9.818: 96.1523% ( 9) 00:19:10.273 9.818 - 9.870: 96.2475% ( 7) 00:19:10.273 9.870 - 9.921: 96.3018% ( 4) 00:19:10.273 9.921 - 9.973: 96.4106% ( 8) 00:19:10.273 9.973 - 10.024: 96.5058% ( 7) 00:19:10.273 10.024 - 10.076: 96.5466% ( 3) 00:19:10.273 10.076 - 10.127: 96.6281% ( 6) 00:19:10.273 10.127 - 10.178: 96.6825% ( 4) 00:19:10.273 10.178 - 10.230: 96.7097% ( 2) 00:19:10.273 10.230 - 10.281: 96.7505% ( 3) 00:19:10.273 10.281 - 10.333: 96.7777% ( 2) 00:19:10.273 10.333 - 10.384: 96.8049% ( 2) 00:19:10.273 10.384 - 10.435: 96.8321% ( 2) 00:19:10.273 10.435 - 10.487: 96.8729% ( 3) 00:19:10.273 10.487 - 10.538: 96.9137% ( 3) 00:19:10.273 10.538 - 10.590: 96.9409% ( 2) 00:19:10.273 10.590 - 10.641: 96.9680% ( 2) 00:19:10.273 10.641 - 10.692: 97.0088% ( 3) 00:19:10.273 10.744 - 10.795: 97.0360% ( 2) 00:19:10.273 10.898 - 10.949: 97.0768% ( 3) 00:19:10.273 10.949 - 11.001: 97.1040% ( 2) 00:19:10.273 11.001 - 11.052: 97.1584% ( 4) 00:19:10.273 11.155 - 11.206: 97.1720% ( 1) 00:19:10.273 11.206 - 11.258: 97.1856% ( 1) 00:19:10.273 11.258 - 11.309: 97.1992% ( 1) 00:19:10.273 11.309 - 11.361: 97.2128% ( 1) 00:19:10.273 11.412 - 11.463: 97.2536% ( 3) 00:19:10.273 11.566 - 11.618: 97.2808% ( 2) 00:19:10.273 11.618 - 11.669: 97.2944% ( 1) 00:19:10.273 11.875 - 11.926: 97.3080% ( 1) 00:19:10.273 11.926 - 11.978: 97.3215% ( 1) 00:19:10.273 11.978 - 12.029: 97.3351% ( 1) 00:19:10.273 12.080 - 12.132: 97.3487% ( 1) 00:19:10.273 12.132 - 12.183: 97.3623% ( 1) 00:19:10.273 12.235 - 12.286: 97.3759% ( 1) 00:19:10.273 12.337 - 12.389: 97.3895% ( 1) 00:19:10.273 12.389 - 12.440: 97.4031% ( 1) 00:19:10.273 12.440 - 12.492: 97.4303% ( 2) 00:19:10.273 12.543 - 12.594: 97.4439% ( 1) 00:19:10.273 12.594 - 12.646: 97.4575% ( 1) 00:19:10.273 12.646 - 12.697: 97.4711% ( 1) 00:19:10.273 12.697 - 12.749: 97.4847% ( 1) 00:19:10.273 12.749 - 12.800: 97.4983% ( 1) 00:19:10.273 12.800 - 12.851: 97.5119% ( 1) 00:19:10.273 12.851 - 12.903: 97.5255% ( 1) 00:19:10.273 12.954 - 13.006: 97.5391% ( 1) 00:19:10.273 13.006 - 13.057: 97.5527% ( 1) 00:19:10.273 13.057 - 13.108: 97.5663% ( 1) 00:19:10.273 13.108 - 13.160: 97.5799% ( 1) 00:19:10.273 13.160 - 13.263: 97.6071% ( 2) 00:19:10.273 13.263 - 13.365: 97.6207% ( 1) 00:19:10.273 13.365 - 13.468: 97.6479% ( 2) 00:19:10.273 13.468 - 13.571: 97.6615% ( 1) 00:19:10.273 13.571 - 13.674: 97.6886% ( 2) 00:19:10.273 
13.674 - 13.777: 97.7430% ( 4) 00:19:10.273 13.777 - 13.880: 97.7974% ( 4) 00:19:10.273 13.880 - 13.982: 97.8654% ( 5) 00:19:10.273 13.982 - 14.085: 97.9334% ( 5) 00:19:10.273 14.085 - 14.188: 97.9878% ( 4) 00:19:10.273 14.188 - 14.291: 98.0286% ( 3) 00:19:10.273 14.291 - 14.394: 98.1509% ( 9) 00:19:10.273 14.394 - 14.496: 98.1781% ( 2) 00:19:10.273 14.496 - 14.599: 98.1917% ( 1) 00:19:10.273 14.599 - 14.702: 98.2325% ( 3) 00:19:10.273 14.702 - 14.805: 98.3413% ( 8) 00:19:10.273 14.805 - 14.908: 98.4364% ( 7) 00:19:10.273 14.908 - 15.010: 98.5044% ( 5) 00:19:10.273 15.010 - 15.113: 98.5316% ( 2) 00:19:10.273 15.113 - 15.216: 98.5724% ( 3) 00:19:10.273 15.216 - 15.319: 98.5996% ( 2) 00:19:10.273 15.319 - 15.422: 98.6812% ( 6) 00:19:10.273 15.422 - 15.524: 98.7084% ( 2) 00:19:10.273 15.524 - 15.627: 98.7220% ( 1) 00:19:10.273 15.627 - 15.730: 98.7763% ( 4) 00:19:10.273 15.730 - 15.833: 98.8307% ( 4) 00:19:10.273 15.833 - 15.936: 98.8579% ( 2) 00:19:10.273 15.936 - 16.039: 98.8851% ( 2) 00:19:10.273 16.244 - 16.347: 98.9123% ( 2) 00:19:10.274 16.347 - 16.450: 98.9395% ( 2) 00:19:10.274 16.553 - 16.655: 98.9531% ( 1) 00:19:10.274 16.758 - 16.861: 98.9667% ( 1) 00:19:10.274 16.861 - 16.964: 98.9803% ( 1) 00:19:10.274 17.067 - 17.169: 99.0075% ( 2) 00:19:10.274 17.169 - 17.272: 99.0347% ( 2) 00:19:10.274 17.375 - 17.478: 99.0891% ( 4) 00:19:10.274 17.581 - 17.684: 99.1298% ( 3) 00:19:10.274 17.684 - 17.786: 99.1706% ( 3) 00:19:10.274 17.786 - 17.889: 99.1978% ( 2) 00:19:10.274 17.889 - 17.992: 99.2114% ( 1) 00:19:10.274 17.992 - 18.095: 99.2386% ( 2) 00:19:10.274 18.095 - 18.198: 99.2658% ( 2) 00:19:10.274 18.198 - 18.300: 99.2930% ( 2) 00:19:10.274 18.300 - 18.403: 99.3202% ( 2) 00:19:10.274 18.403 - 18.506: 99.3610% ( 3) 00:19:10.274 18.712 - 18.814: 99.3746% ( 1) 00:19:10.274 18.814 - 18.917: 99.4154% ( 3) 00:19:10.274 18.917 - 19.020: 99.4426% ( 2) 00:19:10.274 19.329 - 19.431: 99.4697% ( 2) 00:19:10.274 19.431 - 19.534: 99.4833% ( 1) 00:19:10.274 19.637 - 19.740: 99.5513% ( 5) 00:19:10.274 19.945 - 20.048: 99.5649% ( 1) 00:19:10.274 20.151 - 20.254: 99.5921% ( 2) 00:19:10.274 20.357 - 20.459: 99.6193% ( 2) 00:19:10.274 20.459 - 20.562: 99.6329% ( 1) 00:19:10.274 20.562 - 20.665: 99.6601% ( 2) 00:19:10.274 20.665 - 20.768: 99.6873% ( 2) 00:19:10.274 20.768 - 20.871: 99.7009% ( 1) 00:19:10.274 20.871 - 20.973: 99.7145% ( 1) 00:19:10.274 21.076 - 21.179: 99.7281% ( 1) 00:19:10.274 22.207 - 22.310: 99.7417% ( 1) 00:19:10.274 22.721 - 22.824: 99.7553% ( 1) 00:19:10.274 22.824 - 22.927: 99.7689% ( 1) 00:19:10.274 23.647 - 23.749: 99.7825% ( 1) 00:19:10.274 24.161 - 24.263: 99.7961% ( 1) 00:19:10.274 24.469 - 24.572: 99.8097% ( 1) 00:19:10.274 24.675 - 24.778: 99.8232% ( 1) 00:19:10.274 25.189 - 25.292: 99.8368% ( 1) 00:19:10.274 25.292 - 25.394: 99.8504% ( 1) 00:19:10.274 25.908 - 26.011: 99.8640% ( 1) 00:19:10.274 26.011 - 26.114: 99.8776% ( 1) 00:19:10.274 26.114 - 26.217: 99.8912% ( 1) 00:19:10.274 26.217 - 26.320: 99.9048% ( 1) 00:19:10.274 26.731 - 26.937: 99.9184% ( 1) 00:19:10.274 27.142 - 27.348: 99.9320% ( 1) 00:19:10.274 32.900 - 33.105: 99.9456% ( 1) 00:19:10.274 36.395 - 36.601: 99.9592% ( 1) 00:19:10.274 36.806 - 37.012: 99.9728% ( 1) 00:19:10.274 66.210 - 66.622: 99.9864% ( 1) 00:19:10.274 93.353 - 93.764: 100.0000% ( 1) 00:19:10.274 00:19:10.274 00:19:10.274 real 0m1.281s 00:19:10.274 user 0m1.081s 00:19:10.274 sys 0m0.154s 00:19:10.274 12:21:19 nvme.nvme_overhead -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:10.274 12:21:19 nvme.nvme_overhead -- 
common/autotest_common.sh@10 -- # set +x 00:19:10.274 ************************************ 00:19:10.274 END TEST nvme_overhead 00:19:10.274 ************************************ 00:19:10.274 12:21:19 nvme -- common/autotest_common.sh@1142 -- # return 0 00:19:10.274 12:21:19 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:19:10.274 12:21:19 nvme -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:19:10.274 12:21:19 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:10.274 12:21:19 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:10.274 ************************************ 00:19:10.274 START TEST nvme_arbitration 00:19:10.274 ************************************ 00:19:10.274 12:21:19 nvme.nvme_arbitration -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:19:13.598 Initializing NVMe Controllers 00:19:13.598 Attached to 0000:00:10.0 00:19:13.598 Attached to 0000:00:11.0 00:19:13.598 Attached to 0000:00:13.0 00:19:13.598 Attached to 0000:00:12.0 00:19:13.598 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:19:13.598 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:19:13.598 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:19:13.598 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:19:13.598 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:19:13.598 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:19:13.598 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:19:13.598 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:19:13.598 Initialization complete. Launching workers. 
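Every case in this transcript is launched the same way: a run_test helper prints the START TEST / END TEST banners and the real/user/sys timing lines that close each case. The snippet below is only an illustrative sketch of that pattern, not the actual autotest_common.sh implementation; the banner text, the arbitration binary path, and its -t 3 -i 0 arguments come from the log above, everything else is assumed.

#!/usr/bin/env bash
# Hypothetical run_test-style wrapper; NOT the real autotest_common.sh helper.
run_test_sketch() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"          # bash's time keyword prints the real/user/sys lines seen in the log
    local rc=$?        # exit status of the timed command
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return "$rc"
}

# Invocation mirroring the arbitration case above:
run_test_sketch nvme_arbitration \
    /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0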
00:19:13.598 Starting thread on core 1 with urgent priority queue 00:19:13.598 Starting thread on core 2 with urgent priority queue 00:19:13.598 Starting thread on core 3 with urgent priority queue 00:19:13.598 Starting thread on core 0 with urgent priority queue 00:19:13.598 QEMU NVMe Ctrl (12340 ) core 0: 554.67 IO/s 180.29 secs/100000 ios 00:19:13.598 QEMU NVMe Ctrl (12342 ) core 0: 554.67 IO/s 180.29 secs/100000 ios 00:19:13.598 QEMU NVMe Ctrl (12341 ) core 1: 533.33 IO/s 187.50 secs/100000 ios 00:19:13.598 QEMU NVMe Ctrl (12342 ) core 1: 533.33 IO/s 187.50 secs/100000 ios 00:19:13.598 QEMU NVMe Ctrl (12343 ) core 2: 554.67 IO/s 180.29 secs/100000 ios 00:19:13.598 QEMU NVMe Ctrl (12342 ) core 3: 597.33 IO/s 167.41 secs/100000 ios 00:19:13.598 ======================================================== 00:19:13.598 00:19:13.856 00:19:13.856 real 0m3.455s 00:19:13.856 user 0m9.489s 00:19:13.856 sys 0m0.167s 00:19:13.856 12:21:23 nvme.nvme_arbitration -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:13.856 12:21:23 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:19:13.856 ************************************ 00:19:13.856 END TEST nvme_arbitration 00:19:13.856 ************************************ 00:19:13.856 12:21:23 nvme -- common/autotest_common.sh@1142 -- # return 0 00:19:13.856 12:21:23 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:19:13.856 12:21:23 nvme -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:19:13.856 12:21:23 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:13.856 12:21:23 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:13.856 ************************************ 00:19:13.856 START TEST nvme_single_aen 00:19:13.856 ************************************ 00:19:13.856 12:21:23 nvme.nvme_single_aen -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:19:14.114 Asynchronous Event Request test 00:19:14.114 Attached to 0000:00:10.0 00:19:14.114 Attached to 0000:00:11.0 00:19:14.114 Attached to 0000:00:13.0 00:19:14.114 Attached to 0000:00:12.0 00:19:14.114 Reset controller to setup AER completions for this process 00:19:14.114 Registering asynchronous event callbacks... 
00:19:14.114 Getting orig temperature thresholds of all controllers 00:19:14.114 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:19:14.114 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:19:14.114 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:19:14.114 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:19:14.114 Setting all controllers temperature threshold low to trigger AER 00:19:14.114 Waiting for all controllers temperature threshold to be set lower 00:19:14.114 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:19:14.114 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:19:14.114 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:19:14.114 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:19:14.114 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:19:14.114 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:19:14.114 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:19:14.114 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:19:14.114 Waiting for all controllers to trigger AER and reset threshold 00:19:14.114 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:19:14.114 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:19:14.114 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:19:14.114 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:19:14.114 Cleaning up... 00:19:14.114 00:19:14.114 real 0m0.284s 00:19:14.114 user 0m0.106s 00:19:14.114 sys 0m0.133s 00:19:14.114 12:21:23 nvme.nvme_single_aen -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:14.114 12:21:23 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:19:14.114 ************************************ 00:19:14.114 END TEST nvme_single_aen 00:19:14.114 ************************************ 00:19:14.114 12:21:23 nvme -- common/autotest_common.sh@1142 -- # return 0 00:19:14.114 12:21:23 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:19:14.114 12:21:23 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:14.114 12:21:23 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:14.114 12:21:23 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:14.114 ************************************ 00:19:14.114 START TEST nvme_doorbell_aers 00:19:14.114 ************************************ 00:19:14.114 12:21:23 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1123 -- # nvme_doorbell_aers 00:19:14.114 12:21:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:19:14.114 12:21:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:19:14.114 12:21:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:19:14.114 12:21:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:19:14.114 12:21:23 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # bdfs=() 00:19:14.114 12:21:23 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # local bdfs 00:19:14.114 12:21:23 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:19:14.114 12:21:23 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:14.114 12:21:23 nvme.nvme_doorbell_aers -- 
common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:19:14.371 12:21:23 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:19:14.371 12:21:23 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:19:14.371 12:21:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:19:14.371 12:21:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:19:14.629 [2024-07-10 12:21:23.894711] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70276) is not found. Dropping the request. 00:19:24.607 Executing: test_write_invalid_db 00:19:24.607 Waiting for AER completion... 00:19:24.607 Failure: test_write_invalid_db 00:19:24.607 00:19:24.607 Executing: test_invalid_db_write_overflow_sq 00:19:24.607 Waiting for AER completion... 00:19:24.607 Failure: test_invalid_db_write_overflow_sq 00:19:24.607 00:19:24.607 Executing: test_invalid_db_write_overflow_cq 00:19:24.607 Waiting for AER completion... 00:19:24.607 Failure: test_invalid_db_write_overflow_cq 00:19:24.607 00:19:24.607 12:21:33 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:19:24.607 12:21:33 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:19:24.607 [2024-07-10 12:21:33.940769] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70276) is not found. Dropping the request. 00:19:34.619 Executing: test_write_invalid_db 00:19:34.619 Waiting for AER completion... 00:19:34.619 Failure: test_write_invalid_db 00:19:34.619 00:19:34.619 Executing: test_invalid_db_write_overflow_sq 00:19:34.619 Waiting for AER completion... 00:19:34.619 Failure: test_invalid_db_write_overflow_sq 00:19:34.619 00:19:34.619 Executing: test_invalid_db_write_overflow_cq 00:19:34.619 Waiting for AER completion... 00:19:34.619 Failure: test_invalid_db_write_overflow_cq 00:19:34.619 00:19:34.619 12:21:43 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:19:34.619 12:21:43 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:19:34.619 [2024-07-10 12:21:43.989178] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70276) is not found. Dropping the request. 00:19:44.593 Executing: test_write_invalid_db 00:19:44.593 Waiting for AER completion... 00:19:44.593 Failure: test_write_invalid_db 00:19:44.593 00:19:44.593 Executing: test_invalid_db_write_overflow_sq 00:19:44.593 Waiting for AER completion... 00:19:44.593 Failure: test_invalid_db_write_overflow_sq 00:19:44.593 00:19:44.593 Executing: test_invalid_db_write_overflow_cq 00:19:44.593 Waiting for AER completion... 
00:19:44.593 Failure: test_invalid_db_write_overflow_cq 00:19:44.593 00:19:44.593 12:21:53 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:19:44.593 12:21:53 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:19:44.593 [2024-07-10 12:21:54.049776] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70276) is not found. Dropping the request. 00:19:54.568 Executing: test_write_invalid_db 00:19:54.568 Waiting for AER completion... 00:19:54.568 Failure: test_write_invalid_db 00:19:54.568 00:19:54.568 Executing: test_invalid_db_write_overflow_sq 00:19:54.568 Waiting for AER completion... 00:19:54.568 Failure: test_invalid_db_write_overflow_sq 00:19:54.568 00:19:54.568 Executing: test_invalid_db_write_overflow_cq 00:19:54.568 Waiting for AER completion... 00:19:54.568 Failure: test_invalid_db_write_overflow_cq 00:19:54.568 00:19:54.568 ************************************ 00:19:54.568 END TEST nvme_doorbell_aers 00:19:54.568 ************************************ 00:19:54.568 00:19:54.568 real 0m40.304s 00:19:54.568 user 0m29.812s 00:19:54.568 sys 0m10.118s 00:19:54.568 12:22:03 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:54.568 12:22:03 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:19:54.568 12:22:03 nvme -- common/autotest_common.sh@1142 -- # return 0 00:19:54.568 12:22:03 nvme -- nvme/nvme.sh@97 -- # uname 00:19:54.568 12:22:03 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:19:54.568 12:22:03 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:19:54.568 12:22:03 nvme -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:19:54.568 12:22:03 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:54.568 12:22:03 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:54.568 ************************************ 00:19:54.568 START TEST nvme_multi_aen 00:19:54.568 ************************************ 00:19:54.568 12:22:03 nvme.nvme_multi_aen -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:19:54.826 [2024-07-10 12:22:04.128288] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70276) is not found. Dropping the request. 00:19:54.826 [2024-07-10 12:22:04.128418] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70276) is not found. Dropping the request. 00:19:54.826 [2024-07-10 12:22:04.128446] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70276) is not found. Dropping the request. 00:19:54.826 [2024-07-10 12:22:04.130013] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70276) is not found. Dropping the request. 00:19:54.826 [2024-07-10 12:22:04.130047] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70276) is not found. Dropping the request. 00:19:54.826 [2024-07-10 12:22:04.130062] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70276) is not found. Dropping the request. 
00:19:54.826 [2024-07-10 12:22:04.131532] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70276) is not found. Dropping the request. 00:19:54.826 [2024-07-10 12:22:04.131717] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70276) is not found. Dropping the request. 00:19:54.826 [2024-07-10 12:22:04.131849] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70276) is not found. Dropping the request. 00:19:54.826 [2024-07-10 12:22:04.133200] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70276) is not found. Dropping the request. 00:19:54.826 [2024-07-10 12:22:04.133366] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70276) is not found. Dropping the request. 00:19:54.826 [2024-07-10 12:22:04.133470] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70276) is not found. Dropping the request. 00:19:54.826 Child process pid: 70796 00:19:55.086 [Child] Asynchronous Event Request test 00:19:55.086 [Child] Attached to 0000:00:10.0 00:19:55.086 [Child] Attached to 0000:00:11.0 00:19:55.086 [Child] Attached to 0000:00:13.0 00:19:55.086 [Child] Attached to 0000:00:12.0 00:19:55.086 [Child] Registering asynchronous event callbacks... 00:19:55.086 [Child] Getting orig temperature thresholds of all controllers 00:19:55.086 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:19:55.086 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:19:55.086 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:19:55.086 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:19:55.086 [Child] Waiting for all controllers to trigger AER and reset threshold 00:19:55.086 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:19:55.086 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:19:55.086 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:19:55.086 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:19:55.086 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:19:55.086 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:19:55.086 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:19:55.086 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:19:55.086 [Child] Cleaning up... 00:19:55.086 Asynchronous Event Request test 00:19:55.086 Attached to 0000:00:10.0 00:19:55.086 Attached to 0000:00:11.0 00:19:55.086 Attached to 0000:00:13.0 00:19:55.086 Attached to 0000:00:12.0 00:19:55.086 Reset controller to setup AER completions for this process 00:19:55.086 Registering asynchronous event callbacks... 
00:19:55.086 Getting orig temperature thresholds of all controllers 00:19:55.086 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:19:55.086 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:19:55.086 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:19:55.086 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:19:55.086 Setting all controllers temperature threshold low to trigger AER 00:19:55.086 Waiting for all controllers temperature threshold to be set lower 00:19:55.086 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:19:55.086 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:19:55.086 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:19:55.086 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:19:55.086 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:19:55.086 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:19:55.086 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:19:55.086 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:19:55.086 Waiting for all controllers to trigger AER and reset threshold 00:19:55.086 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:19:55.086 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:19:55.086 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:19:55.086 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:19:55.086 Cleaning up... 00:19:55.086 00:19:55.086 real 0m0.635s 00:19:55.086 user 0m0.210s 00:19:55.086 sys 0m0.320s 00:19:55.086 12:22:04 nvme.nvme_multi_aen -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:55.086 12:22:04 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:19:55.086 ************************************ 00:19:55.086 END TEST nvme_multi_aen 00:19:55.086 ************************************ 00:19:55.346 12:22:04 nvme -- common/autotest_common.sh@1142 -- # return 0 00:19:55.346 12:22:04 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:19:55.346 12:22:04 nvme -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:19:55.346 12:22:04 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:55.346 12:22:04 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:55.346 ************************************ 00:19:55.346 START TEST nvme_startup 00:19:55.346 ************************************ 00:19:55.346 12:22:04 nvme.nvme_startup -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:19:55.605 Initializing NVMe Controllers 00:19:55.605 Attached to 0000:00:10.0 00:19:55.605 Attached to 0000:00:11.0 00:19:55.605 Attached to 0000:00:13.0 00:19:55.605 Attached to 0000:00:12.0 00:19:55.605 Initialization complete. 00:19:55.605 Time used:201497.250 (us). 
00:19:55.605 00:19:55.605 real 0m0.304s 00:19:55.605 user 0m0.106s 00:19:55.605 sys 0m0.154s 00:19:55.605 12:22:04 nvme.nvme_startup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:55.605 ************************************ 00:19:55.605 END TEST nvme_startup 00:19:55.605 ************************************ 00:19:55.605 12:22:04 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:19:55.605 12:22:04 nvme -- common/autotest_common.sh@1142 -- # return 0 00:19:55.605 12:22:04 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:19:55.605 12:22:04 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:55.605 12:22:04 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:55.605 12:22:04 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:55.605 ************************************ 00:19:55.605 START TEST nvme_multi_secondary 00:19:55.605 ************************************ 00:19:55.605 12:22:04 nvme.nvme_multi_secondary -- common/autotest_common.sh@1123 -- # nvme_multi_secondary 00:19:55.605 12:22:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=70852 00:19:55.605 12:22:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:19:55.605 12:22:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=70853 00:19:55.605 12:22:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:19:55.605 12:22:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:19:59.795 Initializing NVMe Controllers 00:19:59.795 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:19:59.795 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:19:59.795 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:19:59.796 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:19:59.796 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:19:59.796 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:19:59.796 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:19:59.796 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:19:59.796 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:19:59.796 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:19:59.796 Initialization complete. Launching workers. 
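The nvme_multi_secondary case above launches three spdk_nvme_perf processes with the same shared-memory group id (-i 0) but disjoint core masks, so the short-lived instances attach to the controllers already set up by the longer-running one; the per-lcore associations in the output match the masks (0x1 is core 0, 0x2 is core 1, 0x4 is core 2). A minimal sketch of that launch pattern follows, with the flags copied from the log; which process acts as the primary and the exact backgrounding order are assumptions.

# Minimal sketch of the multi-process launch pattern above; not the nvme.sh test itself.
PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
# Longer (5 s) instance on core 0 (mask 0x1); -i 0 places it in shared-memory group 0
"$PERF" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &
pid0=$!
# 3 s instance on core 2 (mask 0x4), same group, so it attaches to the same controllers
"$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 &
pid1=$!
# 3 s instance on core 1 (mask 0x2) in the foreground
"$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2
wait "$pid0"    # corresponds to the 'wait 70852' / 'wait 70853' steps in the log
wait "$pid1"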
00:19:59.796 ======================================================== 00:19:59.796 Latency(us) 00:19:59.796 Device Information : IOPS MiB/s Average min max 00:19:59.796 PCIE (0000:00:10.0) NSID 1 from core 1: 4756.48 18.58 3361.33 1900.43 8977.47 00:19:59.796 PCIE (0000:00:11.0) NSID 1 from core 1: 4756.48 18.58 3363.24 1947.82 8944.90 00:19:59.796 PCIE (0000:00:13.0) NSID 1 from core 1: 4756.48 18.58 3363.66 2000.51 8677.99 00:19:59.796 PCIE (0000:00:12.0) NSID 1 from core 1: 4756.48 18.58 3363.92 1898.11 8350.92 00:19:59.796 PCIE (0000:00:12.0) NSID 2 from core 1: 4756.48 18.58 3363.97 1917.45 8449.54 00:19:59.796 PCIE (0000:00:12.0) NSID 3 from core 1: 4756.48 18.58 3364.24 1737.14 8519.58 00:19:59.796 ======================================================== 00:19:59.796 Total : 28538.87 111.48 3363.39 1737.14 8977.47 00:19:59.796 00:19:59.796 Initializing NVMe Controllers 00:19:59.796 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:19:59.796 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:19:59.796 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:19:59.796 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:19:59.796 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:19:59.796 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:19:59.796 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:19:59.796 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:19:59.796 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:19:59.796 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:19:59.796 Initialization complete. Launching workers. 00:19:59.796 ======================================================== 00:19:59.796 Latency(us) 00:19:59.796 Device Information : IOPS MiB/s Average min max 00:19:59.796 PCIE (0000:00:10.0) NSID 1 from core 2: 2963.23 11.58 5398.24 1377.44 13881.34 00:19:59.796 PCIE (0000:00:11.0) NSID 1 from core 2: 2963.23 11.58 5399.24 1390.02 13449.59 00:19:59.796 PCIE (0000:00:13.0) NSID 1 from core 2: 2963.23 11.58 5396.92 1234.42 14085.37 00:19:59.796 PCIE (0000:00:12.0) NSID 1 from core 2: 2963.23 11.58 5392.02 1409.38 14289.97 00:19:59.796 PCIE (0000:00:12.0) NSID 2 from core 2: 2963.23 11.58 5391.06 1356.69 14012.42 00:19:59.796 PCIE (0000:00:12.0) NSID 3 from core 2: 2963.23 11.58 5391.39 1371.57 15025.54 00:19:59.796 ======================================================== 00:19:59.796 Total : 17779.40 69.45 5394.81 1234.42 15025.54 00:19:59.796 00:19:59.796 12:22:08 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 70852 00:20:01.170 Initializing NVMe Controllers 00:20:01.170 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:20:01.170 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:20:01.170 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:20:01.170 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:20:01.170 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:20:01.170 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:20:01.170 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:20:01.170 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:20:01.170 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:20:01.170 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:20:01.170 Initialization complete. Launching workers. 
00:20:01.170 ======================================================== 00:20:01.170 Latency(us) 00:20:01.170 Device Information : IOPS MiB/s Average min max 00:20:01.170 PCIE (0000:00:10.0) NSID 1 from core 0: 8251.39 32.23 1937.52 926.25 10225.96 00:20:01.170 PCIE (0000:00:11.0) NSID 1 from core 0: 8251.39 32.23 1938.59 950.34 9579.13 00:20:01.170 PCIE (0000:00:13.0) NSID 1 from core 0: 8251.39 32.23 1938.57 917.05 9286.64 00:20:01.170 PCIE (0000:00:12.0) NSID 1 from core 0: 8251.39 32.23 1938.54 858.39 9792.90 00:20:01.170 PCIE (0000:00:12.0) NSID 2 from core 0: 8251.39 32.23 1938.51 779.77 10629.96 00:20:01.170 PCIE (0000:00:12.0) NSID 3 from core 0: 8254.59 32.24 1937.73 717.13 10603.92 00:20:01.170 ======================================================== 00:20:01.170 Total : 49511.53 193.40 1938.24 717.13 10629.96 00:20:01.170 00:20:01.170 12:22:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 70853 00:20:01.170 12:22:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=70922 00:20:01.170 12:22:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:20:01.170 12:22:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=70923 00:20:01.170 12:22:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:20:01.170 12:22:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:20:04.453 Initializing NVMe Controllers 00:20:04.453 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:20:04.453 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:20:04.453 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:20:04.453 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:20:04.453 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:20:04.453 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:20:04.453 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:20:04.453 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:20:04.453 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:20:04.453 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:20:04.453 Initialization complete. Launching workers. 
00:20:04.453 ======================================================== 00:20:04.453 Latency(us) 00:20:04.453 Device Information : IOPS MiB/s Average min max 00:20:04.453 PCIE (0000:00:10.0) NSID 1 from core 1: 5218.41 20.38 3063.82 957.43 6927.14 00:20:04.453 PCIE (0000:00:11.0) NSID 1 from core 1: 5218.41 20.38 3065.88 958.41 6930.42 00:20:04.453 PCIE (0000:00:13.0) NSID 1 from core 1: 5218.41 20.38 3065.93 968.21 7697.16 00:20:04.453 PCIE (0000:00:12.0) NSID 1 from core 1: 5218.41 20.38 3066.00 986.04 7507.90 00:20:04.453 PCIE (0000:00:12.0) NSID 2 from core 1: 5218.41 20.38 3066.09 975.39 7671.23 00:20:04.453 PCIE (0000:00:12.0) NSID 3 from core 1: 5223.74 20.41 3063.04 976.04 7185.57 00:20:04.453 ======================================================== 00:20:04.453 Total : 31315.80 122.33 3065.13 957.43 7697.16 00:20:04.453 00:20:04.453 Initializing NVMe Controllers 00:20:04.453 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:20:04.453 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:20:04.453 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:20:04.453 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:20:04.453 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:20:04.453 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:20:04.453 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:20:04.453 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:20:04.453 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:20:04.453 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:20:04.453 Initialization complete. Launching workers. 00:20:04.453 ======================================================== 00:20:04.453 Latency(us) 00:20:04.453 Device Information : IOPS MiB/s Average min max 00:20:04.453 PCIE (0000:00:10.0) NSID 1 from core 0: 5044.79 19.71 3169.11 1022.78 7629.45 00:20:04.453 PCIE (0000:00:11.0) NSID 1 from core 0: 5044.79 19.71 3170.82 1026.65 7893.89 00:20:04.453 PCIE (0000:00:13.0) NSID 1 from core 0: 5044.79 19.71 3170.78 1015.99 7760.65 00:20:04.453 PCIE (0000:00:12.0) NSID 1 from core 0: 5044.79 19.71 3170.72 997.84 7342.21 00:20:04.453 PCIE (0000:00:12.0) NSID 2 from core 0: 5044.79 19.71 3170.65 937.22 7208.88 00:20:04.453 PCIE (0000:00:12.0) NSID 3 from core 0: 5044.79 19.71 3170.61 893.36 7511.89 00:20:04.453 ======================================================== 00:20:04.453 Total : 30268.73 118.24 3170.45 893.36 7893.89 00:20:04.453 00:20:06.358 Initializing NVMe Controllers 00:20:06.358 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:20:06.358 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:20:06.358 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:20:06.358 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:20:06.358 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:20:06.358 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:20:06.358 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:20:06.358 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:20:06.358 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:20:06.358 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:20:06.358 Initialization complete. Launching workers. 
00:20:06.358 ======================================================== 00:20:06.358 Latency(us) 00:20:06.358 Device Information : IOPS MiB/s Average min max 00:20:06.358 PCIE (0000:00:10.0) NSID 1 from core 2: 3178.63 12.42 5032.36 1054.44 12940.45 00:20:06.358 PCIE (0000:00:11.0) NSID 1 from core 2: 3178.63 12.42 5033.22 1079.19 13138.53 00:20:06.358 PCIE (0000:00:13.0) NSID 1 from core 2: 3178.63 12.42 5033.15 1041.10 13919.40 00:20:06.358 PCIE (0000:00:12.0) NSID 1 from core 2: 3178.63 12.42 5032.57 1050.62 12932.60 00:20:06.358 PCIE (0000:00:12.0) NSID 2 from core 2: 3178.63 12.42 5033.00 1066.36 13201.22 00:20:06.358 PCIE (0000:00:12.0) NSID 3 from core 2: 3178.63 12.42 5032.90 1076.66 12660.91 00:20:06.358 ======================================================== 00:20:06.358 Total : 19071.80 74.50 5032.87 1041.10 13919.40 00:20:06.358 00:20:06.358 12:22:15 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 70922 00:20:06.358 12:22:15 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 70923 00:20:06.358 00:20:06.358 real 0m10.781s 00:20:06.358 user 0m18.513s 00:20:06.358 sys 0m0.972s 00:20:06.358 ************************************ 00:20:06.358 END TEST nvme_multi_secondary 00:20:06.358 ************************************ 00:20:06.358 12:22:15 nvme.nvme_multi_secondary -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:06.358 12:22:15 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:20:06.358 12:22:15 nvme -- common/autotest_common.sh@1142 -- # return 0 00:20:06.358 12:22:15 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:20:06.358 12:22:15 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:20:06.358 12:22:15 nvme -- common/autotest_common.sh@1087 -- # [[ -e /proc/69861 ]] 00:20:06.358 12:22:15 nvme -- common/autotest_common.sh@1088 -- # kill 69861 00:20:06.358 12:22:15 nvme -- common/autotest_common.sh@1089 -- # wait 69861 00:20:06.358 [2024-07-10 12:22:15.819093] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70795) is not found. Dropping the request. 00:20:06.358 [2024-07-10 12:22:15.820693] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70795) is not found. Dropping the request. 00:20:06.358 [2024-07-10 12:22:15.820796] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70795) is not found. Dropping the request. 00:20:06.358 [2024-07-10 12:22:15.820846] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70795) is not found. Dropping the request. 00:20:06.358 [2024-07-10 12:22:15.827137] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70795) is not found. Dropping the request. 00:20:06.358 [2024-07-10 12:22:15.827513] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70795) is not found. Dropping the request. 00:20:06.358 [2024-07-10 12:22:15.827914] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70795) is not found. Dropping the request. 00:20:06.358 [2024-07-10 12:22:15.828384] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70795) is not found. Dropping the request. 
00:20:06.358 [2024-07-10 12:22:15.832444] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70795) is not found. Dropping the request. 00:20:06.358 [2024-07-10 12:22:15.832826] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70795) is not found. Dropping the request. 00:20:06.358 [2024-07-10 12:22:15.833151] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70795) is not found. Dropping the request. 00:20:06.358 [2024-07-10 12:22:15.833535] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70795) is not found. Dropping the request. 00:20:06.358 [2024-07-10 12:22:15.837448] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70795) is not found. Dropping the request. 00:20:06.358 [2024-07-10 12:22:15.837623] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70795) is not found. Dropping the request. 00:20:06.358 [2024-07-10 12:22:15.837857] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70795) is not found. Dropping the request. 00:20:06.358 [2024-07-10 12:22:15.838064] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70795) is not found. Dropping the request. 00:20:06.926 [2024-07-10 12:22:16.103401] nvme_cuse.c:1023:cuse_thread: *NOTICE*: Cuse thread exited. 00:20:06.926 12:22:16 nvme -- common/autotest_common.sh@1091 -- # rm -f /var/run/spdk_stub0 00:20:06.926 12:22:16 nvme -- common/autotest_common.sh@1095 -- # echo 2 00:20:06.926 12:22:16 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:20:06.926 12:22:16 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:06.926 12:22:16 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:06.926 12:22:16 nvme -- common/autotest_common.sh@10 -- # set +x 00:20:06.926 ************************************ 00:20:06.926 START TEST bdev_nvme_reset_stuck_adm_cmd 00:20:06.926 ************************************ 00:20:06.926 12:22:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:20:06.926 * Looking for test storage... 
00:20:06.926 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:20:06.926 12:22:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:20:06.926 12:22:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:20:06.926 12:22:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:20:06.926 12:22:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:20:06.926 12:22:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:20:06.926 12:22:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:20:06.926 12:22:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # bdfs=() 00:20:06.926 12:22:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # local bdfs 00:20:06.926 12:22:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:20:06.927 12:22:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:20:06.927 12:22:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # bdfs=() 00:20:06.927 12:22:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # local bdfs 00:20:06.927 12:22:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:20:06.927 12:22:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:06.927 12:22:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:20:06.927 12:22:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:20:06.927 12:22:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:20:06.927 12:22:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:20:06.927 12:22:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:20:06.927 12:22:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:20:06.927 12:22:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=71078 00:20:06.927 12:22:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:20:06.927 12:22:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:20:06.927 12:22:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 71078 00:20:06.927 12:22:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@829 -- # '[' -z 71078 ']' 00:20:06.927 12:22:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:06.927 12:22:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:06.927 12:22:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:06.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:06.927 12:22:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:06.927 12:22:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:20:07.185 [2024-07-10 12:22:16.479459] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:20:07.186 [2024-07-10 12:22:16.479789] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71078 ] 00:20:07.444 [2024-07-10 12:22:16.685298] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:07.702 [2024-07-10 12:22:16.985990] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:07.702 [2024-07-10 12:22:16.986113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:07.702 [2024-07-10 12:22:16.986198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:07.702 [2024-07-10 12:22:16.986231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:08.638 12:22:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:08.638 12:22:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@862 -- # return 0 00:20:08.638 12:22:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:20:08.638 12:22:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.638 12:22:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:20:08.896 nvme0n1 00:20:08.896 12:22:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.896 12:22:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:20:08.896 12:22:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_w0pc0.txt 00:20:08.896 12:22:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:20:08.896 12:22:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.896 12:22:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:20:08.896 true 00:20:08.896 12:22:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.896 12:22:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:20:08.896 12:22:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1720614138 00:20:08.896 12:22:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=71112 00:20:08.896 12:22:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:20:08.896 12:22:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 
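For reference, the admin-command error-injection flow that nvme_reset_stuck_adm_cmd.sh drives over JSON-RPC around this point in the trace can be reproduced by hand roughly as follows. This is a minimal sketch only: it assumes an spdk_tgt started with -m 0xF is already listening on /var/tmp/spdk.sock, the RPC names, flags and the base64-encoded command are taken verbatim from this trace, and the backgrounding structure plus the /tmp/err_inj.txt reply file name are illustrative assumptions rather than the test's exact wiring.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Attach the PCIe controller at 0000:00:10.0 as bdev-layer controller "nvme0".
$rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0

# Arm a one-shot injection: the next admin GET FEATURES (opc 10) is held for up to
# 15 s and completed with sct=0 / sc=1 instead of being submitted to the device.
$rpc bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
    --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit

# Issue the admin command that will get stuck (GET FEATURES, number of queues,
# cdw10=0x7; the 64-byte command below is the same base64 blob the test sends).
# The JSON reply's .cpl field is what the test later decodes to check sct/sc.
$rpc bdev_nvme_send_cmd -n nvme0 -t admin -r c2h \
    -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== \
    > /tmp/err_inj.txt &

# While that command is pending, reset the controller; the reset has to succeed and
# the stuck command has to be completed manually with the injected status.
sleep 2
$rpc bdev_nvme_reset_controller nvme0
wait

$rpc bdev_nvme_detach_controller nvme0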
00:20:08.896 12:22:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:20:10.818 12:22:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:20:10.818 12:22:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.818 12:22:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:20:10.818 [2024-07-10 12:22:20.211306] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:20:10.818 [2024-07-10 12:22:20.211867] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:10.818 [2024-07-10 12:22:20.212007] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:20:10.818 [2024-07-10 12:22:20.212039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.818 [2024-07-10 12:22:20.213969] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:20:10.818 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 71112 00:20:10.818 12:22:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.818 12:22:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 71112 00:20:10.818 12:22:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 71112 00:20:10.818 12:22:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:20:10.818 12:22:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:20:10.818 12:22:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:10.818 12:22:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.818 12:22:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:20:10.818 12:22:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.818 12:22:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:20:10.818 12:22:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_w0pc0.txt 00:20:11.077 12:22:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:20:11.077 12:22:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:20:11.077 12:22:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:20:11.077 12:22:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:20:11.077 12:22:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:20:11.077 12:22:20 
nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:20:11.077 12:22:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:20:11.077 12:22:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:20:11.077 12:22:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:20:11.077 12:22:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:20:11.077 12:22:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:20:11.077 12:22:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:20:11.077 12:22:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:20:11.077 12:22:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:20:11.077 12:22:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:20:11.077 12:22:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:20:11.077 12:22:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:20:11.077 12:22:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:20:11.077 12:22:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:20:11.077 12:22:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_w0pc0.txt 00:20:11.077 12:22:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 71078 00:20:11.077 12:22:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@948 -- # '[' -z 71078 ']' 00:20:11.077 12:22:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@952 -- # kill -0 71078 00:20:11.077 12:22:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@953 -- # uname 00:20:11.077 12:22:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:11.077 12:22:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71078 00:20:11.077 killing process with pid 71078 00:20:11.077 12:22:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:11.077 12:22:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:11.077 12:22:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71078' 00:20:11.077 12:22:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@967 -- # kill 71078 00:20:11.077 12:22:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # wait 71078 00:20:14.364 12:22:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:20:14.364 12:22:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:20:14.364 
************************************ 00:20:14.364 END TEST bdev_nvme_reset_stuck_adm_cmd 00:20:14.364 ************************************ 00:20:14.364 00:20:14.364 real 0m7.021s 00:20:14.364 user 0m23.712s 00:20:14.364 sys 0m0.892s 00:20:14.364 12:22:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:14.364 12:22:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:20:14.364 12:22:23 nvme -- common/autotest_common.sh@1142 -- # return 0 00:20:14.364 12:22:23 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:20:14.364 12:22:23 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:20:14.364 12:22:23 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:14.364 12:22:23 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:14.364 12:22:23 nvme -- common/autotest_common.sh@10 -- # set +x 00:20:14.364 ************************************ 00:20:14.364 START TEST nvme_fio 00:20:14.364 ************************************ 00:20:14.364 12:22:23 nvme.nvme_fio -- common/autotest_common.sh@1123 -- # nvme_fio_test 00:20:14.364 12:22:23 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:20:14.364 12:22:23 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:20:14.364 12:22:23 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:20:14.364 12:22:23 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # bdfs=() 00:20:14.364 12:22:23 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # local bdfs 00:20:14.364 12:22:23 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:20:14.364 12:22:23 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:14.364 12:22:23 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:20:14.364 12:22:23 nvme.nvme_fio -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:20:14.364 12:22:23 nvme.nvme_fio -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:20:14.364 12:22:23 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:20:14.364 12:22:23 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:20:14.364 12:22:23 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:20:14.364 12:22:23 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:20:14.364 12:22:23 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:20:14.364 12:22:23 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:20:14.364 12:22:23 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:20:14.622 12:22:23 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:20:14.622 12:22:23 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:20:14.622 12:22:23 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:20:14.622 12:22:23 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 
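For readers not familiar with SPDK's fio integration, the invocation that the fio_nvme/fio_plugin helpers assemble in the trace continuing below reduces to the following sketch. The paths and options are the ones visible in this log; the comments explaining why each piece is there are assumptions, not something the log itself states.

plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme          # SPDK external fio ioengine (ioengine=spdk in the job file)
job=/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio

# On this ASAN-instrumented build, the sanitizer runtime that fio_plugin locates via
# 'ldd .../build/fio/spdk_nvme | grep libasan' must be preloaded ahead of the plugin.
# fio treats ':' as a separator inside --filename, so the PCIe address is written
# with '.' instead: trtype=PCIe traddr=0000.00.10.0.
LD_PRELOAD="/usr/lib64/libasan.so.8 $plugin" \
    /usr/src/fio/fio "$job" --filename="trtype=PCIe traddr=0000.00.10.0" --bs=4096

The same command is repeated further down for traddr 0000.00.11.0, 0000.00.12.0 and 0000.00.13.0, and bs=4096 follows from the preceding spdk_nvme_identify check, which found no 'Extended Data LBA' format on the namespace.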
00:20:14.622 12:22:23 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:14.622 12:22:23 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:14.622 12:22:23 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:14.622 12:22:23 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:20:14.622 12:22:23 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:14.622 12:22:23 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:14.622 12:22:23 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:14.622 12:22:23 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:20:14.623 12:22:23 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:14.623 12:22:23 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:14.623 12:22:23 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:14.623 12:22:23 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:20:14.623 12:22:23 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:20:14.623 12:22:23 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:20:14.881 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:20:14.881 fio-3.35 00:20:14.881 Starting 1 thread 00:20:18.163 00:20:18.163 test: (groupid=0, jobs=1): err= 0: pid=71268: Wed Jul 10 12:22:27 2024 00:20:18.163 read: IOPS=21.4k, BW=83.7MiB/s (87.8MB/s)(168MiB/2001msec) 00:20:18.163 slat (nsec): min=4366, max=59064, avg=5455.90, stdev=1305.24 00:20:18.163 clat (usec): min=207, max=11021, avg=2980.44, stdev=335.35 00:20:18.163 lat (usec): min=213, max=11080, avg=2985.89, stdev=335.70 00:20:18.163 clat percentiles (usec): 00:20:18.163 | 1.00th=[ 2212], 5.00th=[ 2802], 10.00th=[ 2835], 20.00th=[ 2868], 00:20:18.163 | 30.00th=[ 2900], 40.00th=[ 2933], 50.00th=[ 2933], 60.00th=[ 2966], 00:20:18.163 | 70.00th=[ 2999], 80.00th=[ 3032], 90.00th=[ 3130], 95.00th=[ 3261], 00:20:18.163 | 99.00th=[ 4293], 99.50th=[ 4948], 99.90th=[ 6063], 99.95th=[ 8586], 00:20:18.163 | 99.99th=[10814] 00:20:18.163 bw ( KiB/s): min=83744, max=86808, per=98.89%, avg=84781.33, stdev=1755.31, samples=3 00:20:18.163 iops : min=20936, max=21704, avg=21196.00, stdev=439.98, samples=3 00:20:18.163 write: IOPS=21.3k, BW=83.1MiB/s (87.1MB/s)(166MiB/2001msec); 0 zone resets 00:20:18.163 slat (nsec): min=4457, max=29996, avg=5618.53, stdev=1224.70 00:20:18.163 clat (usec): min=259, max=10901, avg=2985.28, stdev=345.12 00:20:18.163 lat (usec): min=264, max=10923, avg=2990.90, stdev=345.44 00:20:18.163 clat percentiles (usec): 00:20:18.163 | 1.00th=[ 2180], 5.00th=[ 2802], 10.00th=[ 2835], 20.00th=[ 2868], 00:20:18.163 | 30.00th=[ 2900], 40.00th=[ 2933], 50.00th=[ 2933], 60.00th=[ 2966], 00:20:18.163 | 70.00th=[ 2999], 80.00th=[ 3032], 90.00th=[ 3130], 95.00th=[ 3294], 00:20:18.163 | 99.00th=[ 4293], 99.50th=[ 4948], 99.90th=[ 6849], 99.95th=[ 8717], 00:20:18.163 | 99.99th=[10552] 00:20:18.163 bw ( KiB/s): min=83728, max=87144, per=99.80%, avg=84898.67, stdev=1945.11, samples=3 00:20:18.163 
iops : min=20932, max=21786, avg=21224.67, stdev=486.28, samples=3 00:20:18.163 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:20:18.163 lat (msec) : 2=0.67%, 4=97.62%, 10=1.65%, 20=0.02% 00:20:18.163 cpu : usr=99.35%, sys=0.10%, ctx=3, majf=0, minf=606 00:20:18.163 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:20:18.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:18.163 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:18.163 issued rwts: total=42886,42555,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:18.163 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:18.163 00:20:18.163 Run status group 0 (all jobs): 00:20:18.163 READ: bw=83.7MiB/s (87.8MB/s), 83.7MiB/s-83.7MiB/s (87.8MB/s-87.8MB/s), io=168MiB (176MB), run=2001-2001msec 00:20:18.163 WRITE: bw=83.1MiB/s (87.1MB/s), 83.1MiB/s-83.1MiB/s (87.1MB/s-87.1MB/s), io=166MiB (174MB), run=2001-2001msec 00:20:18.421 ----------------------------------------------------- 00:20:18.421 Suppressions used: 00:20:18.421 count bytes template 00:20:18.421 1 32 /usr/src/fio/parse.c 00:20:18.421 1 8 libtcmalloc_minimal.so 00:20:18.421 ----------------------------------------------------- 00:20:18.421 00:20:18.421 12:22:27 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:20:18.421 12:22:27 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:20:18.421 12:22:27 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:20:18.421 12:22:27 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:20:18.680 12:22:28 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:20:18.680 12:22:28 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:20:18.938 12:22:28 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:20:18.938 12:22:28 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:20:18.938 12:22:28 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:20:18.938 12:22:28 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:18.938 12:22:28 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:18.938 12:22:28 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:18.938 12:22:28 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:18.938 12:22:28 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:20:18.938 12:22:28 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:18.938 12:22:28 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:18.938 12:22:28 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:18.938 12:22:28 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:20:18.938 12:22:28 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:18.938 12:22:28 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # 
asan_lib=/usr/lib64/libasan.so.8 00:20:18.938 12:22:28 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:18.938 12:22:28 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:20:18.938 12:22:28 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:20:18.938 12:22:28 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:20:19.195 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:20:19.195 fio-3.35 00:20:19.195 Starting 1 thread 00:20:23.373 00:20:23.373 test: (groupid=0, jobs=1): err= 0: pid=71333: Wed Jul 10 12:22:32 2024 00:20:23.373 read: IOPS=21.8k, BW=85.0MiB/s (89.2MB/s)(170MiB/2001msec) 00:20:23.373 slat (nsec): min=4441, max=46932, avg=5429.36, stdev=1136.55 00:20:23.373 clat (usec): min=225, max=10756, avg=2933.76, stdev=243.45 00:20:23.373 lat (usec): min=230, max=10803, avg=2939.19, stdev=243.88 00:20:23.373 clat percentiles (usec): 00:20:23.373 | 1.00th=[ 2737], 5.00th=[ 2802], 10.00th=[ 2835], 20.00th=[ 2868], 00:20:23.373 | 30.00th=[ 2868], 40.00th=[ 2900], 50.00th=[ 2900], 60.00th=[ 2933], 00:20:23.373 | 70.00th=[ 2966], 80.00th=[ 2966], 90.00th=[ 3032], 95.00th=[ 3064], 00:20:23.373 | 99.00th=[ 3392], 99.50th=[ 4047], 99.90th=[ 5669], 99.95th=[ 8356], 00:20:23.373 | 99.99th=[10421] 00:20:23.373 bw ( KiB/s): min=84528, max=87528, per=99.22%, avg=86394.67, stdev=1628.91, samples=3 00:20:23.373 iops : min=21132, max=21882, avg=21598.67, stdev=407.23, samples=3 00:20:23.373 write: IOPS=21.6k, BW=84.4MiB/s (88.5MB/s)(169MiB/2001msec); 0 zone resets 00:20:23.373 slat (nsec): min=4582, max=39847, avg=5593.96, stdev=1119.64 00:20:23.373 clat (usec): min=195, max=10584, avg=2937.43, stdev=252.25 00:20:23.373 lat (usec): min=201, max=10612, avg=2943.02, stdev=252.65 00:20:23.373 clat percentiles (usec): 00:20:23.373 | 1.00th=[ 2737], 5.00th=[ 2802], 10.00th=[ 2835], 20.00th=[ 2868], 00:20:23.373 | 30.00th=[ 2868], 40.00th=[ 2900], 50.00th=[ 2933], 60.00th=[ 2933], 00:20:23.374 | 70.00th=[ 2966], 80.00th=[ 2966], 90.00th=[ 3032], 95.00th=[ 3064], 00:20:23.374 | 99.00th=[ 3490], 99.50th=[ 4080], 99.90th=[ 6456], 99.95th=[ 8586], 00:20:23.374 | 99.99th=[10159] 00:20:23.374 bw ( KiB/s): min=84448, max=88000, per=100.00%, avg=86589.33, stdev=1885.36, samples=3 00:20:23.374 iops : min=21112, max=22000, avg=21647.33, stdev=471.34, samples=3 00:20:23.374 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:20:23.374 lat (msec) : 2=0.05%, 4=99.33%, 10=0.56%, 20=0.01% 00:20:23.374 cpu : usr=99.20%, sys=0.25%, ctx=3, majf=0, minf=605 00:20:23.374 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:20:23.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:23.374 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:23.374 issued rwts: total=43558,43248,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:23.374 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:23.374 00:20:23.374 Run status group 0 (all jobs): 00:20:23.374 READ: bw=85.0MiB/s (89.2MB/s), 85.0MiB/s-85.0MiB/s (89.2MB/s-89.2MB/s), io=170MiB (178MB), run=2001-2001msec 00:20:23.374 WRITE: bw=84.4MiB/s (88.5MB/s), 84.4MiB/s-84.4MiB/s (88.5MB/s-88.5MB/s), io=169MiB (177MB), run=2001-2001msec 00:20:23.374 
----------------------------------------------------- 00:20:23.374 Suppressions used: 00:20:23.374 count bytes template 00:20:23.374 1 32 /usr/src/fio/parse.c 00:20:23.374 1 8 libtcmalloc_minimal.so 00:20:23.374 ----------------------------------------------------- 00:20:23.374 00:20:23.374 12:22:32 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:20:23.374 12:22:32 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:20:23.374 12:22:32 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:20:23.374 12:22:32 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:20:23.374 12:22:32 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:20:23.374 12:22:32 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:20:23.632 12:22:32 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:20:23.632 12:22:32 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:20:23.632 12:22:32 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:20:23.632 12:22:32 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:23.632 12:22:32 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:23.632 12:22:32 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:23.632 12:22:32 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:23.632 12:22:32 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:20:23.632 12:22:32 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:23.632 12:22:32 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:23.632 12:22:32 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:23.632 12:22:32 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:20:23.632 12:22:32 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:23.632 12:22:32 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:23.632 12:22:32 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:23.632 12:22:32 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:20:23.632 12:22:32 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:20:23.632 12:22:32 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:20:23.890 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:20:23.890 fio-3.35 00:20:23.890 Starting 1 thread 00:20:28.126 00:20:28.126 test: (groupid=0, jobs=1): err= 0: pid=71394: Wed Jul 10 12:22:36 2024 00:20:28.126 read: IOPS=21.8k, BW=85.2MiB/s (89.4MB/s)(171MiB/2001msec) 00:20:28.126 slat (usec): min=4, max=107, avg= 5.24, stdev= 1.17 00:20:28.126 
clat (usec): min=230, max=11068, avg=2926.08, stdev=258.82 00:20:28.126 lat (usec): min=234, max=11175, avg=2931.32, stdev=259.27 00:20:28.126 clat percentiles (usec): 00:20:28.126 | 1.00th=[ 2737], 5.00th=[ 2802], 10.00th=[ 2835], 20.00th=[ 2868], 00:20:28.126 | 30.00th=[ 2868], 40.00th=[ 2900], 50.00th=[ 2900], 60.00th=[ 2933], 00:20:28.126 | 70.00th=[ 2933], 80.00th=[ 2966], 90.00th=[ 2999], 95.00th=[ 3032], 00:20:28.126 | 99.00th=[ 3589], 99.50th=[ 4228], 99.90th=[ 5866], 99.95th=[ 8455], 00:20:28.126 | 99.99th=[10683] 00:20:28.126 bw ( KiB/s): min=84416, max=88216, per=99.09%, avg=86482.67, stdev=1921.80, samples=3 00:20:28.126 iops : min=21104, max=22054, avg=21620.67, stdev=480.45, samples=3 00:20:28.126 write: IOPS=21.7k, BW=84.6MiB/s (88.7MB/s)(169MiB/2001msec); 0 zone resets 00:20:28.126 slat (nsec): min=4541, max=26799, avg=5481.46, stdev=1098.40 00:20:28.126 clat (usec): min=238, max=10891, avg=2930.57, stdev=262.71 00:20:28.126 lat (usec): min=243, max=10913, avg=2936.05, stdev=263.10 00:20:28.126 clat percentiles (usec): 00:20:28.126 | 1.00th=[ 2737], 5.00th=[ 2802], 10.00th=[ 2835], 20.00th=[ 2868], 00:20:28.126 | 30.00th=[ 2868], 40.00th=[ 2900], 50.00th=[ 2900], 60.00th=[ 2933], 00:20:28.126 | 70.00th=[ 2933], 80.00th=[ 2966], 90.00th=[ 2999], 95.00th=[ 3032], 00:20:28.126 | 99.00th=[ 3687], 99.50th=[ 4228], 99.90th=[ 6587], 99.95th=[ 8717], 00:20:28.126 | 99.99th=[10552] 00:20:28.126 bw ( KiB/s): min=84304, max=88024, per=100.00%, avg=86685.33, stdev=2067.60, samples=3 00:20:28.126 iops : min=21076, max=22006, avg=21671.33, stdev=516.90, samples=3 00:20:28.126 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:20:28.126 lat (msec) : 2=0.05%, 4=99.13%, 10=0.76%, 20=0.02% 00:20:28.126 cpu : usr=99.15%, sys=0.30%, ctx=2, majf=0, minf=606 00:20:28.126 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:20:28.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:28.126 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:28.126 issued rwts: total=43660,43350,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:28.126 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:28.126 00:20:28.126 Run status group 0 (all jobs): 00:20:28.126 READ: bw=85.2MiB/s (89.4MB/s), 85.2MiB/s-85.2MiB/s (89.4MB/s-89.4MB/s), io=171MiB (179MB), run=2001-2001msec 00:20:28.126 WRITE: bw=84.6MiB/s (88.7MB/s), 84.6MiB/s-84.6MiB/s (88.7MB/s-88.7MB/s), io=169MiB (178MB), run=2001-2001msec 00:20:28.126 ----------------------------------------------------- 00:20:28.126 Suppressions used: 00:20:28.126 count bytes template 00:20:28.126 1 32 /usr/src/fio/parse.c 00:20:28.126 1 8 libtcmalloc_minimal.so 00:20:28.126 ----------------------------------------------------- 00:20:28.126 00:20:28.126 12:22:37 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:20:28.126 12:22:37 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:20:28.126 12:22:37 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:20:28.126 12:22:37 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:20:28.126 12:22:37 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:20:28.126 12:22:37 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:20:28.385 12:22:37 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:20:28.385 12:22:37 nvme.nvme_fio -- 
nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:20:28.385 12:22:37 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:20:28.385 12:22:37 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:28.385 12:22:37 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:28.385 12:22:37 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:28.385 12:22:37 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:28.385 12:22:37 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:20:28.385 12:22:37 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:28.385 12:22:37 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:28.385 12:22:37 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:28.385 12:22:37 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:20:28.385 12:22:37 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:28.385 12:22:37 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:28.385 12:22:37 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:28.385 12:22:37 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:20:28.385 12:22:37 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:20:28.385 12:22:37 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:20:28.385 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:20:28.385 fio-3.35 00:20:28.386 Starting 1 thread 00:20:32.569 00:20:32.570 test: (groupid=0, jobs=1): err= 0: pid=71455: Wed Jul 10 12:22:41 2024 00:20:32.570 read: IOPS=20.8k, BW=81.3MiB/s (85.2MB/s)(163MiB/2001msec) 00:20:32.570 slat (nsec): min=4412, max=84976, avg=5567.78, stdev=1301.67 00:20:32.570 clat (usec): min=205, max=10549, avg=3064.79, stdev=493.40 00:20:32.570 lat (usec): min=210, max=10590, avg=3070.36, stdev=493.84 00:20:32.570 clat percentiles (usec): 00:20:32.570 | 1.00th=[ 1762], 5.00th=[ 2737], 10.00th=[ 2835], 20.00th=[ 2900], 00:20:32.570 | 30.00th=[ 2933], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 3032], 00:20:32.570 | 70.00th=[ 3097], 80.00th=[ 3195], 90.00th=[ 3326], 95.00th=[ 3589], 00:20:32.570 | 99.00th=[ 4948], 99.50th=[ 5800], 99.90th=[ 8356], 99.95th=[ 9372], 00:20:32.570 | 99.99th=[10421] 00:20:32.570 bw ( KiB/s): min=82448, max=85312, per=100.00%, avg=83469.33, stdev=1598.93, samples=3 00:20:32.570 iops : min=20612, max=21328, avg=20867.33, stdev=399.73, samples=3 00:20:32.570 write: IOPS=20.7k, BW=81.0MiB/s (84.9MB/s)(162MiB/2001msec); 0 zone resets 00:20:32.570 slat (nsec): min=4516, max=67988, avg=5715.09, stdev=1320.60 00:20:32.570 clat (usec): min=226, max=10465, avg=3067.58, stdev=516.79 00:20:32.570 lat (usec): min=232, max=10483, avg=3073.29, stdev=517.28 
00:20:32.570 clat percentiles (usec): 00:20:32.570 | 1.00th=[ 1762], 5.00th=[ 2737], 10.00th=[ 2835], 20.00th=[ 2900], 00:20:32.570 | 30.00th=[ 2933], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 3032], 00:20:32.570 | 70.00th=[ 3097], 80.00th=[ 3195], 90.00th=[ 3326], 95.00th=[ 3589], 00:20:32.570 | 99.00th=[ 5014], 99.50th=[ 6259], 99.90th=[ 8455], 99.95th=[ 9372], 00:20:32.570 | 99.99th=[10290] 00:20:32.570 bw ( KiB/s): min=82584, max=85376, per=100.00%, avg=83549.33, stdev=1582.79, samples=3 00:20:32.570 iops : min=20646, max=21344, avg=20887.33, stdev=395.70, samples=3 00:20:32.570 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.03% 00:20:32.570 lat (msec) : 2=1.69%, 4=95.29%, 10=2.95%, 20=0.02% 00:20:32.570 cpu : usr=99.45%, sys=0.00%, ctx=4, majf=0, minf=603 00:20:32.570 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:20:32.570 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:32.570 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:32.570 issued rwts: total=41639,41473,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:32.570 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:32.570 00:20:32.570 Run status group 0 (all jobs): 00:20:32.570 READ: bw=81.3MiB/s (85.2MB/s), 81.3MiB/s-81.3MiB/s (85.2MB/s-85.2MB/s), io=163MiB (171MB), run=2001-2001msec 00:20:32.570 WRITE: bw=81.0MiB/s (84.9MB/s), 81.0MiB/s-81.0MiB/s (84.9MB/s-84.9MB/s), io=162MiB (170MB), run=2001-2001msec 00:20:32.570 ----------------------------------------------------- 00:20:32.570 Suppressions used: 00:20:32.570 count bytes template 00:20:32.570 1 32 /usr/src/fio/parse.c 00:20:32.570 1 8 libtcmalloc_minimal.so 00:20:32.570 ----------------------------------------------------- 00:20:32.570 00:20:32.570 12:22:41 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:20:32.570 12:22:41 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:20:32.570 00:20:32.570 real 0m18.331s 00:20:32.570 user 0m14.517s 00:20:32.570 sys 0m2.907s 00:20:32.570 12:22:41 nvme.nvme_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:32.570 12:22:41 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:20:32.570 ************************************ 00:20:32.570 END TEST nvme_fio 00:20:32.570 ************************************ 00:20:32.570 12:22:41 nvme -- common/autotest_common.sh@1142 -- # return 0 00:20:32.570 00:20:32.570 real 1m33.592s 00:20:32.570 user 3m43.828s 00:20:32.570 sys 0m20.907s 00:20:32.570 12:22:41 nvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:32.570 12:22:41 nvme -- common/autotest_common.sh@10 -- # set +x 00:20:32.570 ************************************ 00:20:32.570 END TEST nvme 00:20:32.570 ************************************ 00:20:32.570 12:22:41 -- common/autotest_common.sh@1142 -- # return 0 00:20:32.570 12:22:41 -- spdk/autotest.sh@217 -- # [[ 0 -eq 1 ]] 00:20:32.570 12:22:41 -- spdk/autotest.sh@221 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:20:32.570 12:22:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:32.570 12:22:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:32.570 12:22:41 -- common/autotest_common.sh@10 -- # set +x 00:20:32.570 ************************************ 00:20:32.570 START TEST nvme_scc 00:20:32.570 ************************************ 00:20:32.570 12:22:41 nvme_scc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:20:32.570 * Looking for test storage... 
00:20:32.570 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:20:32.570 12:22:41 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:20:32.570 12:22:41 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:20:32.570 12:22:41 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:20:32.570 12:22:41 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:32.570 12:22:41 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:32.570 12:22:41 nvme_scc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:32.570 12:22:41 nvme_scc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:32.570 12:22:41 nvme_scc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:32.570 12:22:41 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.570 12:22:41 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.570 12:22:41 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.570 12:22:41 nvme_scc -- paths/export.sh@5 -- # export PATH 00:20:32.570 12:22:41 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.570 12:22:41 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:20:32.570 12:22:41 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:20:32.570 12:22:41 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:20:32.570 12:22:41 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:20:32.570 12:22:41 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:20:32.570 12:22:41 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:20:32.570 12:22:41 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:20:32.570 12:22:41 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:20:32.570 12:22:41 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:20:32.570 12:22:41 nvme_scc -- 
cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:32.570 12:22:41 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:20:32.570 12:22:41 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:20:32.570 12:22:41 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:20:32.570 12:22:41 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:33.137 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:33.395 Waiting for block devices as requested 00:20:33.395 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:33.395 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:33.676 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:20:33.676 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:20:38.956 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:20:38.956 12:22:48 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:20:38.956 12:22:48 nvme_scc -- scripts/common.sh@15 -- # local i 00:20:38.956 12:22:48 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:20:38.956 12:22:48 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:20:38.956 12:22:48 nvme_scc -- scripts/common.sh@24 -- # return 0 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@18 -- # shift 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:20:38.956 
12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:20:38.956 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r 
reg val 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:20:38.957 12:22:48 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0[npss]=0 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:20:38.957 12:22:48 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:20:38.957 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 
00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:20:38.958 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 
00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@18 -- # shift 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@21 
-- # IFS=: 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 
00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:20:38.959 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:20:38.960 12:22:48 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:20:38.960 12:22:48 nvme_scc -- scripts/common.sh@15 -- # local i 00:20:38.960 12:22:48 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:20:38.960 12:22:48 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:20:38.960 12:22:48 nvme_scc -- scripts/common.sh@24 -- # return 0 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@18 -- # shift 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.960 12:22:48 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.960 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[ver]="0x10400"' 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:20:38.961 12:22:48 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.961 
12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:20:38.961 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:20:38.962 12:22:48 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.962 12:22:48 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:20:38.962 12:22:48 nvme_scc 
-- nvme/functions.sh@23 -- # nvme1[pels]=0 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r 
reg val 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:20:38.962 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:20:38.963 12:22:48 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@18 -- # shift 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1n1[nsfeat]="0x14"' 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:20:38.963 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.964 
12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 
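The functions.sh@16-23 records above are the nvme_get helper turning `nvme id-ns /dev/nvme1n1` output into a bash associative array, one `reg : val` pair per iteration. A minimal sketch of that loop, reconstructed from this trace alone (the key/value trimming details are assumptions, not the verbatim SPDK helper):

# Reconstructed from the functions.sh@16-23 records in this log; the real
# helper may normalise keys/values differently. It converts nvme-cli output
# such as "nsze : 0x17a17a" into entries like nvme1n1[nsze]=0x17a17a.
nvme_get() {
    local ref=$1 reg val                              # @17: ref is the array name, e.g. nvme1n1
    shift                                             # @18: remaining args form the nvme-cli command
    local -gA "${ref}=()"                             # @20: declare the global associative array
    while IFS=: read -r reg val; do                   # @21: split each output line on ':'
        [[ -n $val ]] || continue                     # @22: skip lines without a value (e.g. the header)
        reg=${reg//[[:space:]]/}                      # assumption: strip whitespace from the key
        eval "${ref}[${reg}]=\"${val# }\""            # @23: e.g. nvme1n1[nsze]="0x17a17a"
    done < <(/usr/local/src/nvme-cli/nvme "$@")       # @16: the id-ctrl / id-ns invocation seen above
}

The same helper is invoked again further down for the second controller (nvme_get nvme2 id-ctrl /dev/nvme2).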
00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.964 12:22:48 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:20:38.964 12:22:48 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:20:38.965 12:22:48 nvme_scc -- scripts/common.sh@15 -- # local i 00:20:38.965 12:22:48 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:20:38.965 12:22:48 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:20:38.965 12:22:48 nvme_scc -- scripts/common.sh@24 -- # return 0 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@18 -- # shift 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:20:38.965 12:22:48 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg 
val 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.965 12:22:48 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.965 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 
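The functions.sh@47-63 and scripts/common.sh@15-24 records a little earlier show the outer enumeration step: nvme1 and its namespace are registered in the global ctrls/nvmes/bdfs/ordered_ctrls arrays, then /sys/class/nvme/nvme2 (PCI 0000:00:12.0) passes pci_can_use and gets the id-ctrl dump that continues below. A sketch of that loop, reconstructed from the trace (the BDF lookup and helper internals are assumptions):

# Reconstructed from the functions.sh@47-63 and scripts/common.sh@15-24
# records above; array names match the trace, everything else is a sketch.
# pci_can_use comes from scripts/common.sh, nvme_get is the helper sketched earlier.
declare -A ctrls nvmes bdfs
declare -a ordered_ctrls

for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue                        # @48: controller sysfs node exists
    pci=$(readlink -f "$ctrl/device")                 # assumption: resolve the PCI device path
    pci=${pci##*/}                                    # -> BDF such as 0000:00:12.0 (@49)
    pci_can_use "$pci" || continue                    # @50: allow/block-list check
    ctrl_dev=${ctrl##*/}                              # @51: e.g. nvme2
    nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"     # @52: fills nvme2[vid], nvme2[sn], ... as traced below
    # @54-58: each ${ctrl_dev}n* namespace gets its own nvme_get id-ns dump
    ctrls["$ctrl_dev"]=$ctrl_dev                      # @60
    nvmes["$ctrl_dev"]=${ctrl_dev}_ns                 # @61
    bdfs["$ctrl_dev"]=$pci                            # @62
    ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev        # @63
done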
00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.966 12:22:48 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 
00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.966 12:22:48 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.966 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:20:38.967 12:22:48 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[fuses]=0 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.967 
12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme2[ofcs]="0"' 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@18 -- # shift 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:20:38.967 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.968 12:22:48 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:20:38.968 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:20:39.231 12:22:48 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.231 12:22:48 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@18 -- # shift 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme2n2[nsze]="0x100000"' 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 
00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.232 12:22:48 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:20:39.232 12:22:48 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:20:39.232 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@18 -- # shift 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@20 
-- # local -gA 'nvme2n3=()' 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.233 12:22:48 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:20:39.233 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme2n3[nabo]="0"' 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
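The block above is the nvme_get helper from test/common/nvme/functions.sh walking the output of "nvme id-ns /dev/nvme2n3" field by field: each line is split on ':' into a register name and a value, and the value is stored into the nvme2n3 associative array (nsze, ncap, flbas, the lbaf0..lbaf7 format descriptors, and so on). A minimal sketch of that parsing pattern, reconstructed from the trace rather than copied from the script, using the same array and device names:

    # Sketch only: mirrors the IFS=: / read / eval steps visible in the trace above.
    declare -A nvme2n3=()
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue                 # skip lines with no value, as the trace does
        reg=${reg//[[:space:]]/}                  # field name, e.g. nsze, flbas, lbaf4
        eval "nvme2n3[$reg]=\"${val# }\""         # e.g. nvme2n3[nsze]=0x100000
    done < <(/usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3)
    echo "${nvme2n3[nsze]}"                       # 0x100000 on this namespace

Reading the parsed values back: flbas=0x4 selects format lbaf4 (ms:0 lbads:12, marked "in use"), i.e. 4096-byte data blocks with no metadata, and nsze=0x100000 such blocks is roughly 4 GiB for this namespace.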
00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:20:39.234 12:22:48 nvme_scc -- scripts/common.sh@15 -- # local i 00:20:39.234 12:22:48 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:20:39.234 12:22:48 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:20:39.234 12:22:48 nvme_scc -- scripts/common.sh@24 -- # return 0 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@18 -- # shift 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.234 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.235 12:22:48 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:20:39.235 12:22:48 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:20:39.235 12:22:48 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.235 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.236 12:22:48 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:20:39.236 12:22:48 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.236 12:22:48 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.236 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:20:39.237 12:22:48 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:20:39.237 
12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 
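At this point the id-ctrl parse for nvme3 (the 0000:00:13.0 QEMU controller with subnqn nqn.2019-08.org.qemu:fdp-subsys3) is complete; the entries that follow register it in the ctrls/nvmes/bdfs arrays and then read individual fields back through a bash nameref (the get_nvme_ctrl_feature calls in the trace). A simplified stand-in for that read-back pattern, with get_reg as a hypothetical name rather than the script's own helper:

    # Sketch: look up a parsed register by controller name via a nameref,
    # as the functions.sh@69-76 steps in the trace below do.
    get_reg() {
        local ctrl=$1 reg=$2
        local -n _ctrl=$ctrl                      # nameref to the associative array, e.g. nvme3
        [[ -n ${_ctrl[$reg]} ]] && echo "${_ctrl[$reg]}"
    }
    # get_reg nvme3 oncs   -> 0x15d, the value parsed above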
00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:20:39.237 12:22:48 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@194 -- # [[ function == function ]] 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:20:39.237 12:22:48 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme1 00:20:39.238 12:22:48 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme1 oncs 00:20:39.238 12:22:48 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme1 00:20:39.238 12:22:48 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme1 00:20:39.238 12:22:48 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme1 oncs 00:20:39.238 12:22:48 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:20:39.238 12:22:48 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:20:39.238 12:22:48 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:20:39.238 12:22:48 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:20:39.238 12:22:48 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:20:39.238 12:22:48 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:20:39.238 12:22:48 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:20:39.238 12:22:48 nvme_scc -- nvme/functions.sh@197 -- # echo nvme1 00:20:39.238 12:22:48 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:20:39.238 12:22:48 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:20:39.238 12:22:48 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:20:39.238 12:22:48 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme0 00:20:39.238 12:22:48 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:20:39.238 12:22:48 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:20:39.238 12:22:48 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:20:39.238 12:22:48 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:20:39.238 12:22:48 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:20:39.238 12:22:48 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:20:39.238 12:22:48 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:20:39.238 12:22:48 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:20:39.238 12:22:48 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:20:39.238 12:22:48 nvme_scc -- nvme/functions.sh@197 -- # echo nvme0 00:20:39.238 12:22:48 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:20:39.238 12:22:48 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme3 00:20:39.238 12:22:48 nvme_scc -- 
nvme/functions.sh@182 -- # local ctrl=nvme3 oncs 00:20:39.238 12:22:48 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme3 00:20:39.238 12:22:48 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme3 00:20:39.238 12:22:48 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme3 oncs 00:20:39.238 12:22:48 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:20:39.238 12:22:48 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:20:39.238 12:22:48 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:20:39.238 12:22:48 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:20:39.238 12:22:48 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:20:39.238 12:22:48 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:20:39.238 12:22:48 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:20:39.238 12:22:48 nvme_scc -- nvme/functions.sh@197 -- # echo nvme3 00:20:39.238 12:22:48 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:20:39.238 12:22:48 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme2 00:20:39.238 12:22:48 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme2 oncs 00:20:39.238 12:22:48 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme2 00:20:39.238 12:22:48 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme2 00:20:39.238 12:22:48 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme2 oncs 00:20:39.238 12:22:48 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:20:39.238 12:22:48 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:20:39.238 12:22:48 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:20:39.238 12:22:48 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:20:39.238 12:22:48 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:20:39.238 12:22:48 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:20:39.238 12:22:48 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:20:39.238 12:22:48 nvme_scc -- nvme/functions.sh@197 -- # echo nvme2 00:20:39.238 12:22:48 nvme_scc -- nvme/functions.sh@205 -- # (( 4 > 0 )) 00:20:39.238 12:22:48 nvme_scc -- nvme/functions.sh@206 -- # echo nvme1 00:20:39.238 12:22:48 nvme_scc -- nvme/functions.sh@207 -- # return 0 00:20:39.238 12:22:48 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:20:39.238 12:22:48 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:20:39.238 12:22:48 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:40.179 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:40.745 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:40.746 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:20:40.746 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:40.746 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:20:41.004 12:22:50 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:20:41.004 12:22:50 nvme_scc -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:20:41.004 12:22:50 nvme_scc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:41.004 12:22:50 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:20:41.004 ************************************ 00:20:41.004 START TEST nvme_simple_copy 00:20:41.004 ************************************ 00:20:41.004 12:22:50 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1123 -- # 
/home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:20:41.262 Initializing NVMe Controllers 00:20:41.262 Attaching to 0000:00:10.0 00:20:41.262 Controller supports SCC. Attached to 0000:00:10.0 00:20:41.262 Namespace ID: 1 size: 6GB 00:20:41.262 Initialization complete. 00:20:41.262 00:20:41.263 Controller QEMU NVMe Ctrl (12340 ) 00:20:41.263 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:20:41.263 Namespace Block Size:4096 00:20:41.263 Writing LBAs 0 to 63 with Random Data 00:20:41.263 Copied LBAs from 0 - 63 to the Destination LBA 256 00:20:41.263 LBAs matching Written Data: 64 00:20:41.263 00:20:41.263 real 0m0.312s 00:20:41.263 user 0m0.110s 00:20:41.263 sys 0m0.100s 00:20:41.263 12:22:50 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:41.263 12:22:50 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:20:41.263 ************************************ 00:20:41.263 END TEST nvme_simple_copy 00:20:41.263 ************************************ 00:20:41.263 12:22:50 nvme_scc -- common/autotest_common.sh@1142 -- # return 0 00:20:41.263 00:20:41.263 real 0m8.934s 00:20:41.263 user 0m1.455s 00:20:41.263 sys 0m2.560s 00:20:41.263 ************************************ 00:20:41.263 END TEST nvme_scc 00:20:41.263 ************************************ 00:20:41.263 12:22:50 nvme_scc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:41.263 12:22:50 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:20:41.263 12:22:50 -- common/autotest_common.sh@1142 -- # return 0 00:20:41.263 12:22:50 -- spdk/autotest.sh@223 -- # [[ 0 -eq 1 ]] 00:20:41.263 12:22:50 -- spdk/autotest.sh@226 -- # [[ 0 -eq 1 ]] 00:20:41.263 12:22:50 -- spdk/autotest.sh@229 -- # [[ '' -eq 1 ]] 00:20:41.263 12:22:50 -- spdk/autotest.sh@232 -- # [[ 1 -eq 1 ]] 00:20:41.263 12:22:50 -- spdk/autotest.sh@233 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:20:41.263 12:22:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:41.263 12:22:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:41.263 12:22:50 -- common/autotest_common.sh@10 -- # set +x 00:20:41.263 ************************************ 00:20:41.263 START TEST nvme_fdp 00:20:41.263 ************************************ 00:20:41.263 12:22:50 nvme_fdp -- common/autotest_common.sh@1123 -- # test/nvme/nvme_fdp.sh 00:20:41.522 * Looking for test storage... 
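With nvme_scc finished (the END TEST markers above), the controller it ran against was chosen by testing bit 8 of the ONCS field parsed for each controller; every controller here reports oncs=0x15d, so all four qualify and nvme1 (0000:00:10.0) is simply the first one echoed. A sketch of that check, following the (( oncs & 1 << 8 )) test visible in the trace:

    # ONCS is the Optional NVM Command Support bitmask from Identify Controller;
    # bit 8 indicates support for the Copy (simple copy) command.
    oncs=0x15d                                    # bits 0,2,3,4,6,8 set on these QEMU controllers
    if (( oncs & 1 << 8 )); then
        echo "controller supports simple copy"
    fi

The simple_copy binary above then wrote random data to LBAs 0-63 on that controller's namespace, issued a copy with destination LBA 256, and verified that all 64 copied LBAs matched the written data.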
00:20:41.522 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:20:41.522 12:22:50 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:20:41.522 12:22:50 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:20:41.522 12:22:50 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:20:41.522 12:22:50 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:41.522 12:22:50 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:41.522 12:22:50 nvme_fdp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:41.522 12:22:50 nvme_fdp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:41.522 12:22:50 nvme_fdp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:41.522 12:22:50 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.522 12:22:50 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.522 12:22:50 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.522 12:22:50 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:20:41.522 12:22:50 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.522 12:22:50 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:20:41.522 12:22:50 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:20:41.522 12:22:50 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:20:41.522 12:22:50 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:20:41.522 12:22:50 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:20:41.522 12:22:50 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:20:41.522 12:22:50 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:20:41.522 12:22:50 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:20:41.522 12:22:50 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:20:41.522 12:22:50 nvme_fdp -- 
cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:41.522 12:22:50 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:42.089 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:42.360 Waiting for block devices as requested 00:20:42.360 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:42.360 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:42.618 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:20:42.618 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:20:47.894 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:20:47.894 12:22:57 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:20:47.894 12:22:57 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:20:47.894 12:22:57 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:20:47.894 12:22:57 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:20:47.894 12:22:57 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:20:47.894 12:22:57 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:20:47.894 12:22:57 nvme_fdp -- scripts/common.sh@15 -- # local i 00:20:47.894 12:22:57 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:20:47.894 12:22:57 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:20:47.894 12:22:57 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:20:47.894 12:22:57 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:20:47.894 12:22:57 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:20:47.894 12:22:57 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:20:47.894 12:22:57 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:20:47.894 12:22:57 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:20:47.894 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.894 12:22:57 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:20:47.894 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.894 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:20:47.894 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.894 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.894 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:20:47.894 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:20:47.894 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:20:47.894 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.894 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.894 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:20:47.894 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:20:47.894 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:20:47.894 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.894 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.894 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:20:47.894 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:20:47.894 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:20:47.894 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.894 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.894 
12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:20:47.894 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:20:47.894 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:20:47.894 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.894 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.894 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:20:47.894 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:20:47.894 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:20:47.894 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.894 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.894 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:20:47.894 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:20:47.894 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:20:47.894 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.894 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.894 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:20:47.894 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:20:47.894 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:20:47.894 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.894 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.894 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.894 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:20:47.894 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:20:47.894 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.894 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.894 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:20:47.894 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:20:47.894 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:20:47.894 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.894 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.894 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.894 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:20:47.894 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:20:47.894 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.894 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.894 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:20:47.894 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:20:47.894 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:20:47.894 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.894 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme0[rtd3e]="0"' 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:20:47.895 12:22:57 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.895 12:22:57 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:20:47.895 12:22:57 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:20:47.895 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.896 12:22:57 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:20:47.896 12:22:57 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:20:47.896 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.897 12:22:57 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n - ]] 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:20:47.897 
12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:20:47.897 
12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.897 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.898 12:22:57 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:20:47.898 12:22:57 
nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 
00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:20:47.898 12:22:57 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:20:47.898 12:22:57 nvme_fdp -- scripts/common.sh@15 -- # local i 00:20:47.898 12:22:57 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:20:47.899 12:22:57 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:20:47.899 12:22:57 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:20:47.899 12:22:57 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.899 
12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
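
The trace above is nvme/functions.sh walking the controllers under /sys/class/nvme, resolving each one's PCI address, and caching every field that `nvme id-ctrl` / `nvme id-ns` prints into a Bash associative array named after the device (nvme1, nvme1n1, ...). A condensed sketch of that pattern, reconstructed from the xtrace lines in this log, is shown below; the pci_can_use stub and the readlink-based sysfs lookup are simplifications for illustration, not the exact helpers from nvme/functions.sh and scripts/common.sh.

    #!/usr/bin/env bash
    # Sketch of the enumeration/parsing pattern visible in this trace.

    pci_can_use() { return 0; }   # stand-in: the real check in scripts/common.sh consults block/allow lists

    nvme_get() {                  # nvme_get <array name> <id-ctrl|id-ns> <device>
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                      # global associative array, as the trace shows at functions.sh@20
        while IFS=: read -r reg val; do          # split each nvme-cli output line on the first ':'
            [[ -n $val ]] || continue
            reg=${reg//[[:space:]]/}             # field name, e.g. vid, sn, mdts, nsze, lbaf0
            val=${val# }                         # drop the leading space nvme-cli prints
            eval "${ref}[\$reg]=\"\$val\""       # mirrors the eval at functions.sh@23
        done < <(/usr/local/src/nvme-cli/nvme "$@")   # path used in this CI environment; plain 'nvme' elsewhere
    }

    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls

    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        pci=$(basename "$(readlink -f "$ctrl/device")")    # e.g. 0000:00:10.0 (simplified lookup)
        pci_can_use "$pci" || continue

        ctrl_dev=${ctrl##*/}                               # e.g. nvme1
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"

        declare -gA "${ctrl_dev}_ns=()"
        declare -n _ctrl_ns="${ctrl_dev}_ns"
        for ns in "$ctrl/${ctrl##*/}n"*; do
            [[ -e $ns ]] || continue
            ns_dev=${ns##*/}                               # e.g. nvme1n1
            nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
            _ctrl_ns[${ns##*n}]=$ns_dev                    # namespace index -> array name
        done
        unset -n _ctrl_ns

        ctrls["$ctrl_dev"]=$ctrl_dev
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns
        bdfs["$ctrl_dev"]=$pci
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
    done

The later parts of the test can then look up any register by name (for example ${nvme1[mdts]} or ${nvme1n1[nsze]}) without re-running nvme-cli.
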
00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:20:47.899 12:22:57 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[elpe]=0 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.899 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
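
Several of the fields being cached here are log2-encoded. The mdts=7 recorded above, for instance, limits a single data transfer to 2^mdts minimum-size memory pages; assuming the controller's CAP.MPSMIN is 0 (4 KiB pages, a value this trace does not show), that works out to 512 KiB:

    # Worked example (assumes CAP.MPSMIN = 0, i.e. a 4 KiB minimum page size):
    mdts=7
    page_size=$(( 4 * 1024 ))
    echo $(( (1 << mdts) * page_size ))   # 524288 bytes = 512 KiB maximum data transfer size
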
00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:20:47.900 12:22:57 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:20:47.900 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 
00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.901 12:22:57 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.901 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 
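
Once a namespace array like nvme1n1 is populated, the lbaf* strings and flbas captured in this trace are enough to derive the in-use block size. The helper below is a hypothetical illustration (not part of nvme/functions.sh): the low nibble of flbas selects the LBA format, and its lbads field is the log2 of the data block size. With the values this namespace reports (flbas=0x7, and an lbaf7 of 'ms:64 lbads:12 rp:0' marked "(in use)" just below), it would print 4096.

    # Hypothetical helper: derive the in-use data block size from the arrays built above.
    ns_block_size() {                            # ns_block_size <namespace array name>, e.g. nvme1n1
        local -n _ns=$1
        local fmt lbaf lbads
        fmt=$(( ${_ns[flbas]} & 0xf ))           # bits 3:0 of flbas select the LBA format index
        lbaf=${_ns[lbaf$fmt]}                    # e.g. 'ms:64 lbads:12 rp:0 (in use)'
        [[ $lbaf =~ lbads:([0-9]+) ]] || return 1
        lbads=${BASH_REMATCH[1]}
        echo $(( 1 << lbads ))                   # lbads:12 -> 4096-byte data blocks
    }

    # ns_block_size nvme1n1   # -> 4096 for the values captured in this trace
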
00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.902 
12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.902 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 
00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:20:47.903 12:22:57 nvme_fdp -- scripts/common.sh@15 -- # local i 00:20:47.903 12:22:57 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:20:47.903 12:22:57 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:20:47.903 12:22:57 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.903 12:22:57 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[cntlid]="0"' 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:20:47.903 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:20:47.904 12:22:57 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.904 12:22:57 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2[hmmin]=0 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.904 12:22:57 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:20:47.904 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:20:47.905 12:22:57 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:20:47.905 12:22:57 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.905 
12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.905 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n1[nuse]="0x100000"' 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:20:47.906 12:22:57 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.906 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:20:47.907 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:20:47.907 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:20:47.907 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.907 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.907 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:20:47.907 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:20:47.907 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:20:47.907 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.907 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.907 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.907 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:20:47.907 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:20:47.907 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.907 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.907 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.907 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:20:47.907 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n1[anagrpid]=0 00:20:47.907 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.907 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.907 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.907 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:20:47.907 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:20:47.907 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.907 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.907 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.907 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:20:47.907 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:20:47.907 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.907 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.907 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:47.907 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:20:47.907 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:20:47.907 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.907 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.907 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:20:47.907 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:20:47.907 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:20:47.907 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.907 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.907 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:20:47.907 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:20:47.907 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:20:47.907 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:47.907 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:47.907 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:20:48.170 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:20:48.170 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:20:48.170 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.170 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.170 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:20:48.170 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:20:48.170 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:20:48.170 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.170 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.170 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:20:48.170 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 
lbads:9 rp:0 ]] 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:20:48.171 12:22:57 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 
00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.171 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.172 12:22:57 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 
' 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.172 12:22:57 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.172 12:22:57 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:20:48.172 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:20:48.173 12:22:57 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@60 -- # 
ctrls["$ctrl_dev"]=nvme2 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:20:48.173 12:22:57 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:20:48.174 12:22:57 nvme_fdp -- scripts/common.sh@15 -- # local i 00:20:48.174 12:22:57 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:20:48.174 12:22:57 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:20:48.174 12:22:57 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:20:48.174 12:22:57 
nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.174 12:22:57 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.174 12:22:57 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:20:48.174 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 
00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.175 12:22:57 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 
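Note: the nvme_get loops traced above (for nvme2n2, nvme2n3 and now nvme3) all follow the same pattern: run nvme-cli's id-ns or id-ctrl, split each "reg : val" output line on the colon, and stash the pair in a bash associative array named after the device. A minimal stand-alone sketch of that pattern, assuming nvme-cli is installed and using /dev/nvme3 purely as an example device (this is not the SPDK helper itself):

  # Sketch of the "reg : val" parsing seen in nvme/functions.sh's nvme_get.
  declare -A ctrl
  while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}      # drop the padding around the register name
      val=${val# }                  # drop the single space after the colon
      [[ -n $reg && -n $val ]] || continue
      ctrl[$reg]=$val
  done < <(nvme id-ctrl /dev/nvme3)
  printf 'ctratt=%s\n' "${ctrl[ctratt]}"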
00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.175 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.176 12:22:57 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@23 
-- # nvme3[icsvscc]=0 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.176 
12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@61 -- # 
nvmes["$ctrl_dev"]=nvme3_ns 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:20:48.176 12:22:57 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:20:48.176 12:22:57 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:20:48.177 12:22:57 nvme_fdp -- nvme/functions.sh@202 -- # local _ctrls feature=fdp 00:20:48.177 12:22:57 nvme_fdp -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:20:48.177 12:22:57 nvme_fdp -- nvme/functions.sh@204 -- # get_ctrls_with_feature fdp 00:20:48.177 12:22:57 nvme_fdp -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:20:48.177 12:22:57 nvme_fdp -- nvme/functions.sh@192 -- # local ctrl feature=fdp 00:20:48.177 12:22:57 nvme_fdp -- nvme/functions.sh@194 -- # type -t ctrl_has_fdp 00:20:48.177 12:22:57 nvme_fdp -- nvme/functions.sh@194 -- # [[ function == function ]] 00:20:48.177 12:22:57 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:20:48.177 12:22:57 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme1 00:20:48.177 12:22:57 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme1 ctratt 00:20:48.177 12:22:57 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme1 00:20:48.177 12:22:57 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme1 00:20:48.177 12:22:57 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme1 ctratt 00:20:48.177 12:22:57 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:20:48.177 12:22:57 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:20:48.177 12:22:57 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:20:48.177 12:22:57 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:20:48.177 12:22:57 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:20:48.177 12:22:57 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:20:48.177 12:22:57 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:20:48.177 12:22:57 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:20:48.177 12:22:57 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme0 00:20:48.177 12:22:57 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme0 ctratt 00:20:48.177 12:22:57 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme0 00:20:48.177 12:22:57 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme0 00:20:48.177 12:22:57 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme0 ctratt 00:20:48.177 12:22:57 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:20:48.177 12:22:57 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:20:48.177 12:22:57 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:20:48.177 12:22:57 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:20:48.177 12:22:57 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:20:48.177 12:22:57 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:20:48.177 12:22:57 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:20:48.177 12:22:57 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:20:48.177 12:22:57 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme3 00:20:48.177 12:22:57 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme3 ctratt 00:20:48.177 12:22:57 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme3 00:20:48.177 12:22:57 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme3 00:20:48.177 12:22:57 nvme_fdp -- 
nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme3 ctratt 00:20:48.177 12:22:57 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:20:48.177 12:22:57 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:20:48.177 12:22:57 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:20:48.177 12:22:57 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:20:48.177 12:22:57 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:20:48.177 12:22:57 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x88010 00:20:48.177 12:22:57 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:20:48.177 12:22:57 nvme_fdp -- nvme/functions.sh@197 -- # echo nvme3 00:20:48.177 12:22:57 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:20:48.177 12:22:57 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme2 00:20:48.177 12:22:57 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme2 ctratt 00:20:48.177 12:22:57 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme2 00:20:48.177 12:22:57 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme2 00:20:48.177 12:22:57 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme2 ctratt 00:20:48.177 12:22:57 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:20:48.177 12:22:57 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:20:48.177 12:22:57 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:20:48.177 12:22:57 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:20:48.177 12:22:57 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:20:48.177 12:22:57 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:20:48.177 12:22:57 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:20:48.177 12:22:57 nvme_fdp -- nvme/functions.sh@205 -- # (( 1 > 0 )) 00:20:48.177 12:22:57 nvme_fdp -- nvme/functions.sh@206 -- # echo nvme3 00:20:48.177 12:22:57 nvme_fdp -- nvme/functions.sh@207 -- # return 0 00:20:48.177 12:22:57 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:20:48.177 12:22:57 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:20:48.177 12:22:57 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:48.744 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:49.679 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:49.680 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:20:49.680 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:49.680 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:20:49.680 12:22:59 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:20:49.680 12:22:59 nvme_fdp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:20:49.680 12:22:59 nvme_fdp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:49.680 12:22:59 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:20:49.680 ************************************ 00:20:49.680 START TEST nvme_flexible_data_placement 00:20:49.680 ************************************ 00:20:49.680 12:22:59 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:20:50.247 Initializing NVMe Controllers 00:20:50.247 Attaching to 0000:00:13.0 00:20:50.247 Controller supports FDP Attached to 0000:00:13.0 00:20:50.247 Namespace ID: 1 Endurance Group ID: 1 
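Note: the controller-selection loop above decides FDP capability from CTRATT bit 19: nvme3 reports ctratt=0x88010 (bit 19, mask 0x80000, is set) and is chosen, while the controllers reporting 0x8000 are skipped. A hedged equivalent check against a live device, assuming nvme-cli with JSON output plus jq, with /dev/nvme3 as an example path:

  # CTRATT bit 19 = Flexible Data Placement supported (same test as in the trace).
  ctratt=$(nvme id-ctrl /dev/nvme3 -o json | jq -r '.ctratt')
  if (( ctratt & (1 << 19) )); then
      printf 'FDP supported (ctratt=0x%x)\n' "$ctratt"
  else
      echo 'FDP not supported'
  fi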
00:20:50.247 Initialization complete. 00:20:50.247 00:20:50.247 ================================== 00:20:50.247 == FDP tests for Namespace: #01 == 00:20:50.247 ================================== 00:20:50.247 00:20:50.247 Get Feature: FDP: 00:20:50.247 ================= 00:20:50.247 Enabled: Yes 00:20:50.247 FDP configuration Index: 0 00:20:50.247 00:20:50.247 FDP configurations log page 00:20:50.247 =========================== 00:20:50.247 Number of FDP configurations: 1 00:20:50.247 Version: 0 00:20:50.247 Size: 112 00:20:50.247 FDP Configuration Descriptor: 0 00:20:50.247 Descriptor Size: 96 00:20:50.247 Reclaim Group Identifier format: 2 00:20:50.247 FDP Volatile Write Cache: Not Present 00:20:50.247 FDP Configuration: Valid 00:20:50.247 Vendor Specific Size: 0 00:20:50.247 Number of Reclaim Groups: 2 00:20:50.247 Number of Reclaim Unit Handles: 8 00:20:50.247 Max Placement Identifiers: 128 00:20:50.247 Number of Namespaces Supported: 256 00:20:50.247 Reclaim unit Nominal Size: 6000000 bytes 00:20:50.247 Estimated Reclaim Unit Time Limit: Not Reported 00:20:50.247 RUH Desc #000: RUH Type: Initially Isolated 00:20:50.247 RUH Desc #001: RUH Type: Initially Isolated 00:20:50.247 RUH Desc #002: RUH Type: Initially Isolated 00:20:50.247 RUH Desc #003: RUH Type: Initially Isolated 00:20:50.247 RUH Desc #004: RUH Type: Initially Isolated 00:20:50.247 RUH Desc #005: RUH Type: Initially Isolated 00:20:50.247 RUH Desc #006: RUH Type: Initially Isolated 00:20:50.247 RUH Desc #007: RUH Type: Initially Isolated 00:20:50.247 00:20:50.247 FDP reclaim unit handle usage log page 00:20:50.247 ====================================== 00:20:50.247 Number of Reclaim Unit Handles: 8 00:20:50.247 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:20:50.247 RUH Usage Desc #001: RUH Attributes: Unused 00:20:50.247 RUH Usage Desc #002: RUH Attributes: Unused 00:20:50.247 RUH Usage Desc #003: RUH Attributes: Unused 00:20:50.247 RUH Usage Desc #004: RUH Attributes: Unused 00:20:50.247 RUH Usage Desc #005: RUH Attributes: Unused 00:20:50.247 RUH Usage Desc #006: RUH Attributes: Unused 00:20:50.247 RUH Usage Desc #007: RUH Attributes: Unused 00:20:50.247 00:20:50.247 FDP statistics log page 00:20:50.247 ======================= 00:20:50.247 Host bytes with metadata written: 906018816 00:20:50.247 Media bytes with metadata written: 906190848 00:20:50.247 Media bytes erased: 0 00:20:50.247 00:20:50.247 FDP Reclaim unit handle status 00:20:50.247 ============================== 00:20:50.247 Number of RUHS descriptors: 2 00:20:50.247 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000005ff4 00:20:50.247 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:20:50.247 00:20:50.247 FDP write on placement id: 0 success 00:20:50.247 00:20:50.248 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:20:50.248 00:20:50.248 IO mgmt send: RUH update for Placement ID: #0 Success 00:20:50.248 00:20:50.248 Get Feature: FDP Events for Placement handle: #0 00:20:50.248 ======================== 00:20:50.248 Number of FDP Events: 6 00:20:50.248 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:20:50.248 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:20:50.248 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:20:50.248 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:20:50.248 FDP Event: #4 Type: Media Reallocated Enabled: No 00:20:50.248 FDP Event: #5 Type: Implicitly modified RUH Enabled: No
00:20:50.248 00:20:50.248 FDP events log page 00:20:50.248 =================== 00:20:50.248 Number of FDP events: 1 00:20:50.248 FDP Event #0: 00:20:50.248 Event Type: RU Not Written to Capacity 00:20:50.248 Placement Identifier: Valid 00:20:50.248 NSID: Valid 00:20:50.248 Location: Valid 00:20:50.248 Placement Identifier: 0 00:20:50.248 Event Timestamp: 8 00:20:50.248 Namespace Identifier: 1 00:20:50.248 Reclaim Group Identifier: 0 00:20:50.248 Reclaim Unit Handle Identifier: 0 00:20:50.248 00:20:50.248 FDP test passed 00:20:50.248 00:20:50.248 real 0m0.294s 00:20:50.248 user 0m0.083s 00:20:50.248 sys 0m0.110s 00:20:50.248 12:22:59 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:50.248 12:22:59 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:20:50.248 ************************************ 00:20:50.248 END TEST nvme_flexible_data_placement 00:20:50.248 ************************************ 00:20:50.248 12:22:59 nvme_fdp -- common/autotest_common.sh@1142 -- # return 0 00:20:50.248 00:20:50.248 real 0m8.807s 00:20:50.248 user 0m1.307s 00:20:50.248 sys 0m2.551s 00:20:50.248 12:22:59 nvme_fdp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:50.248 12:22:59 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:20:50.248 ************************************ 00:20:50.248 END TEST nvme_fdp 00:20:50.248 ************************************ 00:20:50.248 12:22:59 -- common/autotest_common.sh@1142 -- # return 0 00:20:50.248 12:22:59 -- spdk/autotest.sh@236 -- # [[ '' -eq 1 ]] 00:20:50.248 12:22:59 -- spdk/autotest.sh@240 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:20:50.248 12:22:59 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:50.248 12:22:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:50.248 12:22:59 -- common/autotest_common.sh@10 -- # set +x 00:20:50.248 ************************************ 00:20:50.248 START TEST nvme_rpc 00:20:50.248 ************************************ 00:20:50.248 12:22:59 nvme_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:20:50.248 * Looking for test storage... 
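The FDP controller selection traced above reduces to one capability test in nvme/functions.sh: a controller qualifies when bit 19 of its CTRATT value is set, which is why nvme3 with ctratt 0x88010 is picked while the 0x8000 controllers are skipped. A minimal sketch of that check, reconstructed from the trace and assuming the ctrls[] map and get_ctratt helper that functions.sh provides:

    # returns success (0) only when the controller advertises Flexible Data Placement
    ctrl_has_fdp() {
        local ctrl=$1 ctratt
        ctratt=$(get_ctratt "$ctrl")   # e.g. 0x88010 for nvme3, 0x8000 for the rest
        ((ctratt & 1 << 19))           # bit 19 of CTRATT == FDP supported
    }
    for ctrl in "${!ctrls[@]}"; do
        ctrl_has_fdp "$ctrl" && echo "$ctrl"
    done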
00:20:50.248 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:20:50.248 12:22:59 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:50.248 12:22:59 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:20:50.248 12:22:59 nvme_rpc -- common/autotest_common.sh@1524 -- # bdfs=() 00:20:50.248 12:22:59 nvme_rpc -- common/autotest_common.sh@1524 -- # local bdfs 00:20:50.248 12:22:59 nvme_rpc -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:20:50.248 12:22:59 nvme_rpc -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:20:50.248 12:22:59 nvme_rpc -- common/autotest_common.sh@1513 -- # bdfs=() 00:20:50.248 12:22:59 nvme_rpc -- common/autotest_common.sh@1513 -- # local bdfs 00:20:50.248 12:22:59 nvme_rpc -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:20:50.248 12:22:59 nvme_rpc -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:50.248 12:22:59 nvme_rpc -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:20:50.506 12:22:59 nvme_rpc -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:20:50.506 12:22:59 nvme_rpc -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:20:50.506 12:22:59 nvme_rpc -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:20:50.506 12:22:59 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:20:50.506 12:22:59 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=72809 00:20:50.506 12:22:59 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:20:50.506 12:22:59 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:20:50.506 12:22:59 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 72809 00:20:50.506 12:22:59 nvme_rpc -- common/autotest_common.sh@829 -- # '[' -z 72809 ']' 00:20:50.506 12:22:59 nvme_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:50.506 12:22:59 nvme_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:50.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:50.506 12:22:59 nvme_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:50.506 12:22:59 nvme_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:50.506 12:22:59 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:50.506 [2024-07-10 12:22:59.923041] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:20:50.506 [2024-07-10 12:22:59.923174] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72809 ] 00:20:50.764 [2024-07-10 12:23:00.093970] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:51.022 [2024-07-10 12:23:00.378886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:51.022 [2024-07-10 12:23:00.378933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:52.394 12:23:01 nvme_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:52.394 12:23:01 nvme_rpc -- common/autotest_common.sh@862 -- # return 0 00:20:52.394 12:23:01 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:20:52.394 Nvme0n1 00:20:52.394 12:23:01 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:20:52.394 12:23:01 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:20:52.651 request: 00:20:52.651 { 00:20:52.651 "bdev_name": "Nvme0n1", 00:20:52.651 "filename": "non_existing_file", 00:20:52.651 "method": "bdev_nvme_apply_firmware", 00:20:52.651 "req_id": 1 00:20:52.651 } 00:20:52.651 Got JSON-RPC error response 00:20:52.651 response: 00:20:52.651 { 00:20:52.651 "code": -32603, 00:20:52.651 "message": "open file failed." 00:20:52.651 } 00:20:52.651 12:23:01 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:20:52.651 12:23:01 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:20:52.651 12:23:01 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:20:52.651 12:23:02 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:20:52.651 12:23:02 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 72809 00:20:52.651 12:23:02 nvme_rpc -- common/autotest_common.sh@948 -- # '[' -z 72809 ']' 00:20:52.651 12:23:02 nvme_rpc -- common/autotest_common.sh@952 -- # kill -0 72809 00:20:52.651 12:23:02 nvme_rpc -- common/autotest_common.sh@953 -- # uname 00:20:52.651 12:23:02 nvme_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:52.651 12:23:02 nvme_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72809 00:20:52.651 12:23:02 nvme_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:52.651 killing process with pid 72809 00:20:52.651 12:23:02 nvme_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:52.651 12:23:02 nvme_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72809' 00:20:52.651 12:23:02 nvme_rpc -- common/autotest_common.sh@967 -- # kill 72809 00:20:52.651 12:23:02 nvme_rpc -- common/autotest_common.sh@972 -- # wait 72809 00:20:55.932 ************************************ 00:20:55.932 END TEST nvme_rpc 00:20:55.932 ************************************ 00:20:55.932 00:20:55.932 real 0m5.248s 00:20:55.932 user 0m9.207s 00:20:55.932 sys 0m0.900s 00:20:55.932 12:23:04 nvme_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:55.932 12:23:04 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:55.932 12:23:04 -- common/autotest_common.sh@1142 -- # return 0 00:20:55.932 12:23:04 -- spdk/autotest.sh@241 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 
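The nvme_rpc run that just finished is a negative-path check: it attaches the first controller as bdev Nvme0, asks bdev_nvme_apply_firmware to read a file that does not exist, and passes only because the RPC comes back with the expected -32603 "open file failed." error before the controller is detached and the target is killed. Condensed into a sketch, with the rpc.py path and BDF taken from the log and the failure assertion written out explicitly:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
    if "$rpc" bdev_nvme_apply_firmware non_existing_file Nvme0n1; then
        echo "firmware apply unexpectedly succeeded" >&2
        exit 1                         # the test expects the -32603 "open file failed." error
    fi
    "$rpc" bdev_nvme_detach_controller Nvme0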
00:20:55.932 12:23:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:55.932 12:23:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:55.932 12:23:04 -- common/autotest_common.sh@10 -- # set +x 00:20:55.933 ************************************ 00:20:55.933 START TEST nvme_rpc_timeouts 00:20:55.933 ************************************ 00:20:55.933 12:23:04 nvme_rpc_timeouts -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:20:55.933 * Looking for test storage... 00:20:55.933 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:20:55.933 12:23:05 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:55.933 12:23:05 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_72890 00:20:55.933 12:23:05 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_72890 00:20:55.933 12:23:05 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=72920 00:20:55.933 12:23:05 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:20:55.933 12:23:05 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:20:55.933 12:23:05 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 72920 00:20:55.933 12:23:05 nvme_rpc_timeouts -- common/autotest_common.sh@829 -- # '[' -z 72920 ']' 00:20:55.933 12:23:05 nvme_rpc_timeouts -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:55.933 12:23:05 nvme_rpc_timeouts -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:55.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:55.933 12:23:05 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:55.933 12:23:05 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:55.933 12:23:05 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:20:55.933 [2024-07-10 12:23:05.147390] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:20:55.933 [2024-07-10 12:23:05.147525] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72920 ] 00:20:55.933 [2024-07-10 12:23:05.322025] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:56.191 [2024-07-10 12:23:05.610673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:56.191 [2024-07-10 12:23:05.610705] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:57.567 12:23:06 nvme_rpc_timeouts -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:57.567 12:23:06 nvme_rpc_timeouts -- common/autotest_common.sh@862 -- # return 0 00:20:57.567 Checking default timeout settings: 00:20:57.567 12:23:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:20:57.567 12:23:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:20:57.567 Making settings changes with rpc: 00:20:57.567 12:23:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:20:57.567 12:23:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:20:57.824 Check default vs. modified settings: 00:20:57.824 12:23:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:20:57.824 12:23:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:20:58.082 12:23:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:20:58.082 12:23:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:20:58.082 12:23:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_72890 00:20:58.082 12:23:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:20:58.082 12:23:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:20:58.082 12:23:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:20:58.082 12:23:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_72890 00:20:58.082 12:23:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:20:58.082 12:23:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:20:58.082 12:23:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:20:58.082 12:23:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:20:58.082 Setting action_on_timeout is changed as expected. 00:20:58.082 12:23:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
00:20:58.082 12:23:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:20:58.082 12:23:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_72890 00:20:58.082 12:23:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:20:58.082 12:23:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:20:58.082 12:23:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:20:58.082 12:23:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:20:58.082 12:23:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_72890 00:20:58.082 12:23:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:20:58.082 12:23:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:20:58.082 12:23:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:20:58.082 Setting timeout_us is changed as expected. 00:20:58.082 12:23:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:20:58.082 12:23:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:20:58.082 12:23:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_72890 00:20:58.082 12:23:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:20:58.082 12:23:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:20:58.340 12:23:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:20:58.340 12:23:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_72890 00:20:58.340 12:23:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:20:58.340 12:23:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:20:58.340 12:23:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:20:58.340 12:23:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:20:58.340 Setting timeout_admin_us is changed as expected. 00:20:58.340 12:23:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
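Each "changed as expected" message above comes from the same comparison step: the JSON configuration is dumped with save_config before and after bdev_nvme_set_options, and the value of every setting is extracted from both snapshots with grep, awk and sed. A condensed sketch using the commands and temp file names visible in the trace (xtrace hides the redirections, so those are assumed here):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" save_config > /tmp/settings_default_72890
    "$rpc" bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
    "$rpc" save_config > /tmp/settings_modified_72890
    for setting in action_on_timeout timeout_us timeout_admin_us; do
        before=$(grep "$setting" /tmp/settings_default_72890 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting" /tmp/settings_modified_72890 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        [[ $before != "$after" ]] && echo "Setting $setting is changed as expected."
    done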
00:20:58.340 12:23:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:20:58.340 12:23:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_72890 /tmp/settings_modified_72890 00:20:58.340 12:23:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 72920 00:20:58.340 12:23:07 nvme_rpc_timeouts -- common/autotest_common.sh@948 -- # '[' -z 72920 ']' 00:20:58.340 12:23:07 nvme_rpc_timeouts -- common/autotest_common.sh@952 -- # kill -0 72920 00:20:58.340 12:23:07 nvme_rpc_timeouts -- common/autotest_common.sh@953 -- # uname 00:20:58.340 12:23:07 nvme_rpc_timeouts -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:58.340 12:23:07 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72920 00:20:58.340 12:23:07 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:58.340 12:23:07 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:58.340 12:23:07 nvme_rpc_timeouts -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72920' 00:20:58.340 killing process with pid 72920 00:20:58.340 12:23:07 nvme_rpc_timeouts -- common/autotest_common.sh@967 -- # kill 72920 00:20:58.340 12:23:07 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # wait 72920 00:21:01.621 RPC TIMEOUT SETTING TEST PASSED. 00:21:01.621 12:23:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:21:01.621 00:21:01.621 real 0m5.541s 00:21:01.621 user 0m9.998s 00:21:01.621 sys 0m0.917s 00:21:01.621 12:23:10 nvme_rpc_timeouts -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:01.621 12:23:10 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:21:01.621 ************************************ 00:21:01.621 END TEST nvme_rpc_timeouts 00:21:01.621 ************************************ 00:21:01.621 12:23:10 -- common/autotest_common.sh@1142 -- # return 0 00:21:01.621 12:23:10 -- spdk/autotest.sh@243 -- # uname -s 00:21:01.621 12:23:10 -- spdk/autotest.sh@243 -- # '[' Linux = Linux ']' 00:21:01.621 12:23:10 -- spdk/autotest.sh@244 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:21:01.621 12:23:10 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:01.621 12:23:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:01.621 12:23:10 -- common/autotest_common.sh@10 -- # set +x 00:21:01.621 ************************************ 00:21:01.621 START TEST sw_hotplug 00:21:01.621 ************************************ 00:21:01.621 12:23:10 sw_hotplug -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:21:01.621 * Looking for test storage... 
00:21:01.621 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:21:01.621 12:23:10 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:01.879 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:01.879 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:01.879 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:01.879 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:01.879 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:02.146 12:23:11 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:21:02.146 12:23:11 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:21:02.146 12:23:11 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 00:21:02.146 12:23:11 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:21:02.146 12:23:11 sw_hotplug -- scripts/common.sh@309 -- # local bdf bdfs 00:21:02.146 12:23:11 sw_hotplug -- scripts/common.sh@310 -- # local nvmes 00:21:02.146 12:23:11 sw_hotplug -- scripts/common.sh@312 -- # [[ -n '' ]] 00:21:02.146 12:23:11 sw_hotplug -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:21:02.146 12:23:11 sw_hotplug -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:21:02.146 12:23:11 sw_hotplug -- scripts/common.sh@295 -- # local bdf= 00:21:02.146 12:23:11 sw_hotplug -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:21:02.146 12:23:11 sw_hotplug -- scripts/common.sh@230 -- # local class 00:21:02.146 12:23:11 sw_hotplug -- scripts/common.sh@231 -- # local subclass 00:21:02.146 12:23:11 sw_hotplug -- scripts/common.sh@232 -- # local progif 00:21:02.146 12:23:11 sw_hotplug -- scripts/common.sh@233 -- # printf %02x 1 00:21:02.146 12:23:11 sw_hotplug -- scripts/common.sh@233 -- # class=01 00:21:02.146 12:23:11 sw_hotplug -- scripts/common.sh@234 -- # printf %02x 8 00:21:02.146 12:23:11 sw_hotplug -- scripts/common.sh@234 -- # subclass=08 00:21:02.146 12:23:11 sw_hotplug -- scripts/common.sh@235 -- # printf %02x 2 00:21:02.146 12:23:11 sw_hotplug -- scripts/common.sh@235 -- # progif=02 00:21:02.146 12:23:11 sw_hotplug -- scripts/common.sh@237 -- # hash lspci 00:21:02.146 12:23:11 sw_hotplug -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:21:02.146 12:23:11 sw_hotplug -- scripts/common.sh@239 -- # lspci -mm -n -D 00:21:02.146 12:23:11 sw_hotplug -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:21:02.146 12:23:11 sw_hotplug -- scripts/common.sh@240 -- # grep -i -- -p02 00:21:02.146 12:23:11 sw_hotplug -- scripts/common.sh@242 -- # tr -d '"' 00:21:02.146 12:23:11 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:21:02.146 12:23:11 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:21:02.146 12:23:11 sw_hotplug -- scripts/common.sh@15 -- # local i 00:21:02.146 12:23:11 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:21:02.146 12:23:11 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:21:02.146 12:23:11 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:21:02.146 12:23:11 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:21:02.146 12:23:11 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:21:02.146 12:23:11 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:21:02.146 12:23:11 sw_hotplug -- 
scripts/common.sh@15 -- # local i 00:21:02.146 12:23:11 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:21:02.146 12:23:11 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:21:02.146 12:23:11 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:21:02.146 12:23:11 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:21:02.146 12:23:11 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:21:02.146 12:23:11 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:12.0 00:21:02.146 12:23:11 sw_hotplug -- scripts/common.sh@15 -- # local i 00:21:02.146 12:23:11 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:21:02.146 12:23:11 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:21:02.146 12:23:11 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:21:02.146 12:23:11 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:12.0 00:21:02.146 12:23:11 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:21:02.146 12:23:11 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:13.0 00:21:02.146 12:23:11 sw_hotplug -- scripts/common.sh@15 -- # local i 00:21:02.146 12:23:11 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:21:02.146 12:23:11 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:21:02.146 12:23:11 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:21:02.146 12:23:11 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:13.0 00:21:02.146 12:23:11 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:21:02.146 12:23:11 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:21:02.146 12:23:11 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:21:02.146 12:23:11 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:21:02.146 12:23:11 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:21:02.146 12:23:11 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:21:02.146 12:23:11 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:21:02.146 12:23:11 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:21:02.146 12:23:11 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:21:02.146 12:23:11 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:21:02.146 12:23:11 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:21:02.146 12:23:11 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:21:02.146 12:23:11 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:21:02.146 12:23:11 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:21:02.146 12:23:11 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:21:02.146 12:23:11 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:21:02.146 12:23:11 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:21:02.146 12:23:11 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:21:02.146 12:23:11 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:21:02.146 12:23:11 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:21:02.146 12:23:11 sw_hotplug -- scripts/common.sh@325 -- # (( 4 )) 00:21:02.146 12:23:11 sw_hotplug -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:21:02.146 12:23:11 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:21:02.146 12:23:11 sw_hotplug -- 
nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:21:02.146 12:23:11 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:02.728 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:02.987 Waiting for block devices as requested 00:21:02.987 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:02.987 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:03.245 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:21:03.245 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:21:08.586 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:21:08.586 12:23:17 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:21:08.586 12:23:17 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:08.845 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:21:09.104 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:09.104 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:21:09.363 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:21:09.621 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:09.621 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:09.880 12:23:19 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:21:09.880 12:23:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:21:09.880 12:23:19 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:21:09.880 12:23:19 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:21:09.880 12:23:19 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=73794 00:21:09.880 12:23:19 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:21:09.880 12:23:19 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:21:09.880 12:23:19 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:21:09.880 12:23:19 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:21:09.880 12:23:19 sw_hotplug -- common/autotest_common.sh@705 -- # local cmd_es=0 00:21:09.880 12:23:19 sw_hotplug -- common/autotest_common.sh@707 -- # [[ -t 0 ]] 00:21:09.880 12:23:19 sw_hotplug -- common/autotest_common.sh@707 -- # exec 00:21:09.880 12:23:19 sw_hotplug -- common/autotest_common.sh@709 -- # local time=0 TIMEFORMAT=%2R 00:21:09.880 12:23:19 sw_hotplug -- common/autotest_common.sh@715 -- # remove_attach_helper 3 6 false 00:21:09.880 12:23:19 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:21:09.880 12:23:19 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:21:09.880 12:23:19 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:21:09.880 12:23:19 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:21:09.880 12:23:19 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:21:10.139 Initializing NVMe Controllers 00:21:10.139 Attaching to 0000:00:10.0 00:21:10.139 Attaching to 0000:00:11.0 00:21:10.139 Attached to 0000:00:11.0 00:21:10.139 Attached to 0000:00:10.0 00:21:10.139 Initialization complete. Starting I/O... 
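The two controllers being exercised by this hotplug run, 0000:00:10.0 and 0000:00:11.0, were picked out earlier in the trace purely by PCI class code: class 01 (mass storage), subclass 08 (non-volatile memory), programming interface 02 (NVMe), filtered through PCI_ALLOWED and then truncated to nvme_count=2. The enumeration pipeline below is reconstructed from the scripts/common.sh trace; the stage ordering is inferred from the source line numbers and is an assumption:

    # list NVMe controllers by PCI class 01/08/02, dropping lspci's quoting
    lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'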
00:21:10.139 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:21:10.139 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:21:10.139 00:21:11.074 QEMU NVMe Ctrl (12341 ): 1556 I/Os completed (+1556) 00:21:11.074 QEMU NVMe Ctrl (12340 ): 1564 I/Os completed (+1564) 00:21:11.074 00:21:12.501 QEMU NVMe Ctrl (12341 ): 3696 I/Os completed (+2140) 00:21:12.501 QEMU NVMe Ctrl (12340 ): 3704 I/Os completed (+2140) 00:21:12.501 00:21:13.068 QEMU NVMe Ctrl (12341 ): 5932 I/Os completed (+2236) 00:21:13.068 QEMU NVMe Ctrl (12340 ): 5940 I/Os completed (+2236) 00:21:13.068 00:21:14.444 QEMU NVMe Ctrl (12341 ): 8024 I/Os completed (+2092) 00:21:14.444 QEMU NVMe Ctrl (12340 ): 8032 I/Os completed (+2092) 00:21:14.444 00:21:15.380 QEMU NVMe Ctrl (12341 ): 10272 I/Os completed (+2248) 00:21:15.380 QEMU NVMe Ctrl (12340 ): 10281 I/Os completed (+2249) 00:21:15.380 00:21:15.946 12:23:25 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:21:15.946 12:23:25 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:21:15.946 12:23:25 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:21:15.946 [2024-07-10 12:23:25.278815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:21:15.946 Controller removed: QEMU NVMe Ctrl (12340 ) 00:21:15.946 [2024-07-10 12:23:25.280661] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:15.946 [2024-07-10 12:23:25.280739] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:15.946 [2024-07-10 12:23:25.280763] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:15.946 [2024-07-10 12:23:25.280786] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:15.946 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:21:15.946 [2024-07-10 12:23:25.283533] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:15.946 [2024-07-10 12:23:25.283587] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:15.946 [2024-07-10 12:23:25.283605] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:15.946 [2024-07-10 12:23:25.283624] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:15.947 12:23:25 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:21:15.947 12:23:25 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:21:15.947 [2024-07-10 12:23:25.318357] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:21:15.947 Controller removed: QEMU NVMe Ctrl (12341 ) 00:21:15.947 [2024-07-10 12:23:25.319882] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:15.947 [2024-07-10 12:23:25.319933] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:15.947 [2024-07-10 12:23:25.319960] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:15.947 [2024-07-10 12:23:25.319981] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:15.947 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:21:15.947 [2024-07-10 12:23:25.322552] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:15.947 [2024-07-10 12:23:25.322594] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:15.947 [2024-07-10 12:23:25.322615] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:15.947 [2024-07-10 12:23:25.322631] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:15.947 12:23:25 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:21:15.947 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:21:15.947 EAL: Scan for (pci) bus failed. 00:21:15.947 12:23:25 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:21:16.205 12:23:25 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:21:16.205 12:23:25 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:21:16.205 12:23:25 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:21:16.205 00:21:16.205 12:23:25 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:21:16.205 12:23:25 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:21:16.205 12:23:25 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:21:16.205 12:23:25 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:21:16.205 12:23:25 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:21:16.205 Attaching to 0000:00:10.0 00:21:16.205 Attached to 0000:00:10.0 00:21:16.205 12:23:25 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:21:16.205 12:23:25 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:21:16.205 12:23:25 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:21:16.205 Attaching to 0000:00:11.0 00:21:16.205 Attached to 0000:00:11.0 00:21:17.142 QEMU NVMe Ctrl (12340 ): 2032 I/Os completed (+2032) 00:21:17.142 QEMU NVMe Ctrl (12341 ): 1784 I/Os completed (+1784) 00:21:17.142 00:21:18.078 QEMU NVMe Ctrl (12340 ): 4256 I/Os completed (+2224) 00:21:18.078 QEMU NVMe Ctrl (12341 ): 4008 I/Os completed (+2224) 00:21:18.078 00:21:19.104 QEMU NVMe Ctrl (12340 ): 6436 I/Os completed (+2180) 00:21:19.104 QEMU NVMe Ctrl (12341 ): 6188 I/Os completed (+2180) 00:21:19.104 00:21:20.042 QEMU NVMe Ctrl (12340 ): 8603 I/Os completed (+2167) 00:21:20.042 QEMU NVMe Ctrl (12341 ): 8357 I/Os completed (+2169) 00:21:20.042 00:21:21.418 QEMU NVMe Ctrl (12340 ): 10823 I/Os completed (+2220) 00:21:21.418 QEMU NVMe Ctrl (12341 ): 10578 I/Os completed (+2221) 00:21:21.418 00:21:22.355 QEMU NVMe Ctrl (12340 ): 12975 I/Os completed (+2152) 00:21:22.355 QEMU NVMe Ctrl (12341 ): 12731 I/Os completed (+2153) 00:21:22.355 00:21:23.291 QEMU NVMe Ctrl (12340 ): 15199 I/Os completed (+2224) 00:21:23.291 QEMU NVMe Ctrl (12341 ): 14955 I/Os completed (+2224) 
00:21:23.291 00:21:24.225 QEMU NVMe Ctrl (12340 ): 17427 I/Os completed (+2228) 00:21:24.225 QEMU NVMe Ctrl (12341 ): 17183 I/Os completed (+2228) 00:21:24.225 00:21:25.160 QEMU NVMe Ctrl (12340 ): 19651 I/Os completed (+2224) 00:21:25.160 QEMU NVMe Ctrl (12341 ): 19407 I/Os completed (+2224) 00:21:25.160 00:21:26.095 QEMU NVMe Ctrl (12340 ): 21891 I/Os completed (+2240) 00:21:26.095 QEMU NVMe Ctrl (12341 ): 21647 I/Os completed (+2240) 00:21:26.095 00:21:27.077 QEMU NVMe Ctrl (12340 ): 24015 I/Os completed (+2124) 00:21:27.077 QEMU NVMe Ctrl (12341 ): 23771 I/Os completed (+2124) 00:21:27.077 00:21:28.012 QEMU NVMe Ctrl (12340 ): 26139 I/Os completed (+2124) 00:21:28.012 QEMU NVMe Ctrl (12341 ): 25896 I/Os completed (+2125) 00:21:28.012 00:21:28.268 12:23:37 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:21:28.268 12:23:37 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:21:28.268 12:23:37 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:21:28.268 12:23:37 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:21:28.268 [2024-07-10 12:23:37.686270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:21:28.268 Controller removed: QEMU NVMe Ctrl (12340 ) 00:21:28.268 [2024-07-10 12:23:37.691601] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:28.268 [2024-07-10 12:23:37.691806] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:28.268 [2024-07-10 12:23:37.691879] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:28.268 [2024-07-10 12:23:37.691952] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:28.268 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:21:28.268 [2024-07-10 12:23:37.699947] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:28.268 [2024-07-10 12:23:37.700046] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:28.268 [2024-07-10 12:23:37.700084] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:28.268 [2024-07-10 12:23:37.700125] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:28.268 12:23:37 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:21:28.268 12:23:37 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:21:28.268 [2024-07-10 12:23:37.723075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:21:28.268 Controller removed: QEMU NVMe Ctrl (12341 ) 00:21:28.268 [2024-07-10 12:23:37.724727] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:28.268 [2024-07-10 12:23:37.724792] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:28.268 [2024-07-10 12:23:37.724822] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:28.268 [2024-07-10 12:23:37.724855] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:28.268 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:21:28.268 [2024-07-10 12:23:37.727457] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:28.268 [2024-07-10 12:23:37.727501] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:28.268 [2024-07-10 12:23:37.727522] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:28.268 [2024-07-10 12:23:37.727543] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:28.269 12:23:37 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:21:28.269 12:23:37 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:21:28.269 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:21:28.269 EAL: Scan for (pci) bus failed. 00:21:28.526 12:23:37 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:21:28.526 12:23:37 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:21:28.526 12:23:37 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:21:28.526 12:23:37 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:21:28.526 12:23:37 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:21:28.526 12:23:37 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:21:28.526 12:23:37 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:21:28.526 12:23:37 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:21:28.526 Attaching to 0000:00:10.0 00:21:28.526 Attached to 0000:00:10.0 00:21:28.783 12:23:38 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:21:28.783 12:23:38 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:21:28.783 12:23:38 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:21:28.783 Attaching to 0000:00:11.0 00:21:28.783 Attached to 0000:00:11.0 00:21:29.041 QEMU NVMe Ctrl (12340 ): 1144 I/Os completed (+1144) 00:21:29.041 QEMU NVMe Ctrl (12341 ): 912 I/Os completed (+912) 00:21:29.041 00:21:30.415 QEMU NVMe Ctrl (12340 ): 3320 I/Os completed (+2176) 00:21:30.415 QEMU NVMe Ctrl (12341 ): 3092 I/Os completed (+2180) 00:21:30.415 00:21:31.350 QEMU NVMe Ctrl (12340 ): 5520 I/Os completed (+2200) 00:21:31.350 QEMU NVMe Ctrl (12341 ): 5292 I/Os completed (+2200) 00:21:31.350 00:21:32.285 QEMU NVMe Ctrl (12340 ): 7736 I/Os completed (+2216) 00:21:32.286 QEMU NVMe Ctrl (12341 ): 7508 I/Os completed (+2216) 00:21:32.286 00:21:33.222 QEMU NVMe Ctrl (12340 ): 9964 I/Os completed (+2228) 00:21:33.222 QEMU NVMe Ctrl (12341 ): 9736 I/Os completed (+2228) 00:21:33.222 00:21:34.155 QEMU NVMe Ctrl (12340 ): 12184 I/Os completed (+2220) 00:21:34.155 QEMU NVMe Ctrl (12341 ): 11956 I/Os completed (+2220) 00:21:34.155 00:21:35.091 QEMU NVMe Ctrl (12340 ): 14368 I/Os completed (+2184) 00:21:35.091 QEMU NVMe Ctrl (12341 ): 14142 I/Os completed (+2186) 00:21:35.091 
00:21:36.023 QEMU NVMe Ctrl (12340 ): 16552 I/Os completed (+2184) 00:21:36.023 QEMU NVMe Ctrl (12341 ): 16329 I/Os completed (+2187) 00:21:36.023 00:21:36.997 QEMU NVMe Ctrl (12340 ): 18752 I/Os completed (+2200) 00:21:36.997 QEMU NVMe Ctrl (12341 ): 18527 I/Os completed (+2198) 00:21:36.997 00:21:38.374 QEMU NVMe Ctrl (12340 ): 20944 I/Os completed (+2192) 00:21:38.374 QEMU NVMe Ctrl (12341 ): 20720 I/Os completed (+2193) 00:21:38.374 00:21:39.310 QEMU NVMe Ctrl (12340 ): 23132 I/Os completed (+2188) 00:21:39.310 QEMU NVMe Ctrl (12341 ): 22908 I/Os completed (+2188) 00:21:39.310 00:21:40.246 QEMU NVMe Ctrl (12340 ): 25320 I/Os completed (+2188) 00:21:40.246 QEMU NVMe Ctrl (12341 ): 25096 I/Os completed (+2188) 00:21:40.246 00:21:40.813 12:23:50 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:21:40.813 12:23:50 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:21:40.813 12:23:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:21:40.813 12:23:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:21:40.813 [2024-07-10 12:23:50.054370] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:21:40.813 Controller removed: QEMU NVMe Ctrl (12340 ) 00:21:40.813 [2024-07-10 12:23:50.056078] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:40.813 [2024-07-10 12:23:50.056141] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:40.813 [2024-07-10 12:23:50.056164] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:40.813 [2024-07-10 12:23:50.056189] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:40.813 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:21:40.813 [2024-07-10 12:23:50.059159] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:40.813 [2024-07-10 12:23:50.059212] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:40.813 [2024-07-10 12:23:50.059231] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:40.813 [2024-07-10 12:23:50.059250] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:40.813 12:23:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:21:40.813 12:23:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:21:40.813 [2024-07-10 12:23:50.093692] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:21:40.813 Controller removed: QEMU NVMe Ctrl (12341 ) 00:21:40.813 [2024-07-10 12:23:50.095316] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:40.813 [2024-07-10 12:23:50.095370] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:40.813 [2024-07-10 12:23:50.095395] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:40.813 [2024-07-10 12:23:50.095415] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:40.813 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:21:40.813 [2024-07-10 12:23:50.098180] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:40.813 [2024-07-10 12:23:50.098227] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:40.813 [2024-07-10 12:23:50.098252] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:40.813 [2024-07-10 12:23:50.098270] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:40.813 12:23:50 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:21:40.813 12:23:50 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:21:40.813 12:23:50 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:21:40.813 12:23:50 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:21:40.813 12:23:50 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:21:41.072 12:23:50 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:21:41.072 12:23:50 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:21:41.072 12:23:50 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:21:41.072 12:23:50 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:21:41.072 12:23:50 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:21:41.072 Attaching to 0000:00:10.0 00:21:41.072 Attached to 0000:00:10.0 00:21:41.072 12:23:50 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:21:41.072 12:23:50 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:21:41.072 12:23:50 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:21:41.072 Attaching to 0000:00:11.0 00:21:41.072 Attached to 0000:00:11.0 00:21:41.072 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:21:41.072 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:21:41.072 [2024-07-10 12:23:50.444308] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:21:53.292 12:24:02 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:21:53.292 12:24:02 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:21:53.292 12:24:02 sw_hotplug -- common/autotest_common.sh@715 -- # time=43.16 00:21:53.292 12:24:02 sw_hotplug -- common/autotest_common.sh@716 -- # echo 43.16 00:21:53.292 12:24:02 sw_hotplug -- common/autotest_common.sh@718 -- # return 0 00:21:53.292 12:24:02 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.16 00:21:53.292 12:24:02 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.16 2 00:21:53.292 remove_attach_helper took 43.16s to complete (handling 2 nvme drive(s)) 12:24:02 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:21:59.853 12:24:08 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 73794 00:21:59.853 
/home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (73794) - No such process 00:21:59.853 12:24:08 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 73794 00:21:59.853 12:24:08 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:21:59.853 12:24:08 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:21:59.853 12:24:08 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:21:59.853 12:24:08 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=74335 00:21:59.853 12:24:08 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:59.853 12:24:08 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:21:59.853 12:24:08 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 74335 00:21:59.853 12:24:08 sw_hotplug -- common/autotest_common.sh@829 -- # '[' -z 74335 ']' 00:21:59.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:59.853 12:24:08 sw_hotplug -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:59.853 12:24:08 sw_hotplug -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:59.853 12:24:08 sw_hotplug -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:59.853 12:24:08 sw_hotplug -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:59.853 12:24:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:21:59.853 [2024-07-10 12:24:08.557258] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:21:59.853 [2024-07-10 12:24:08.557631] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74335 ] 00:21:59.853 [2024-07-10 12:24:08.728457] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:59.853 [2024-07-10 12:24:09.008161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:00.790 12:24:10 sw_hotplug -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:00.790 12:24:10 sw_hotplug -- common/autotest_common.sh@862 -- # return 0 00:22:00.790 12:24:10 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:22:00.790 12:24:10 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.790 12:24:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:00.790 12:24:10 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.790 12:24:10 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:22:00.790 12:24:10 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:22:00.790 12:24:10 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:22:00.790 12:24:10 sw_hotplug -- common/autotest_common.sh@705 -- # local cmd_es=0 00:22:00.790 12:24:10 sw_hotplug -- common/autotest_common.sh@707 -- # [[ -t 0 ]] 00:22:00.790 12:24:10 sw_hotplug -- common/autotest_common.sh@707 -- # exec 00:22:00.790 12:24:10 sw_hotplug -- common/autotest_common.sh@709 -- # local time=0 TIMEFORMAT=%2R 00:22:00.790 12:24:10 sw_hotplug -- common/autotest_common.sh@715 -- # remove_attach_helper 3 6 true 00:22:00.791 12:24:10 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:22:00.791 12:24:10 
sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:22:00.791 12:24:10 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:22:00.791 12:24:10 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:22:00.791 12:24:10 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:22:07.357 12:24:16 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:22:07.357 12:24:16 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:22:07.357 12:24:16 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:22:07.357 12:24:16 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:22:07.357 12:24:16 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:22:07.357 12:24:16 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:22:07.357 12:24:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:22:07.357 12:24:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:22:07.357 12:24:16 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:22:07.357 12:24:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:22:07.357 12:24:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:22:07.357 12:24:16 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.357 12:24:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:07.357 [2024-07-10 12:24:16.142080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:22:07.357 [2024-07-10 12:24:16.145051] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:07.357 [2024-07-10 12:24:16.145098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:22:07.357 [2024-07-10 12:24:16.145122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.357 [2024-07-10 12:24:16.145150] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:07.357 [2024-07-10 12:24:16.145168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:22:07.357 [2024-07-10 12:24:16.145184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.357 [2024-07-10 12:24:16.145206] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:07.357 [2024-07-10 12:24:16.145230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:22:07.357 [2024-07-10 12:24:16.145249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.357 [2024-07-10 12:24:16.145265] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:07.357 [2024-07-10 12:24:16.145284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:22:07.357 [2024-07-10 12:24:16.145300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.357 12:24:16 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.357 12:24:16 sw_hotplug -- 
nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:22:07.357 12:24:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:22:07.357 [2024-07-10 12:24:16.641280] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:22:07.357 [2024-07-10 12:24:16.643788] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:07.357 [2024-07-10 12:24:16.643843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:22:07.357 [2024-07-10 12:24:16.643862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.357 [2024-07-10 12:24:16.643888] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:07.357 [2024-07-10 12:24:16.643900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:22:07.357 [2024-07-10 12:24:16.643916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.357 [2024-07-10 12:24:16.643929] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:07.357 [2024-07-10 12:24:16.643943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:22:07.357 [2024-07-10 12:24:16.643956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.357 [2024-07-10 12:24:16.643971] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:07.357 [2024-07-10 12:24:16.643983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:22:07.357 [2024-07-10 12:24:16.643997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.357 12:24:16 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:22:07.357 12:24:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:22:07.357 12:24:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:22:07.357 12:24:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:22:07.357 12:24:16 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:22:07.357 12:24:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:22:07.357 12:24:16 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.357 12:24:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:07.357 12:24:16 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.357 12:24:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:22:07.357 12:24:16 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:22:07.616 12:24:16 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:22:07.616 12:24:16 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:22:07.616 12:24:16 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:22:07.616 12:24:16 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:22:07.616 12:24:16 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:22:07.616 12:24:16 
sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:22:07.616 12:24:16 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:22:07.616 12:24:16 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:22:07.616 12:24:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:22:07.616 12:24:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:22:07.616 12:24:17 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:22:19.823 12:24:29 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:22:19.823 12:24:29 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:22:19.823 12:24:29 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:22:19.823 12:24:29 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:22:19.823 12:24:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:22:19.823 12:24:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:22:19.823 12:24:29 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.823 12:24:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:19.823 12:24:29 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.823 12:24:29 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:22:19.823 12:24:29 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:22:19.823 12:24:29 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:22:19.823 12:24:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:22:19.823 12:24:29 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:22:19.823 12:24:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:22:19.823 12:24:29 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:22:19.823 12:24:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:22:19.823 12:24:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:22:19.823 12:24:29 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:22:19.823 12:24:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:22:19.823 12:24:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:22:19.823 12:24:29 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.823 12:24:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:19.823 [2024-07-10 12:24:29.221047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
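The sw_hotplug.sh@12-@13 trace that keeps recurring above is the test's bdev_bdfs helper: it asks the running spdk_tgt for its bdev list over RPC and extracts the PCI address of every NVMe-backed bdev. A minimal sketch, assuming rpc_cmd is the autotest wrapper that drives scripts/rpc.py against /var/tmp/spdk.sock (the /dev/fd/63 seen in the trace is simply the process substitution this pipeline creates):

    bdev_bdfs() {
        # List all bdevs from the target, keep only the PCI address of each
        # NVMe namespace, and de-duplicate (one controller can back several bdevs).
        jq -r '.[].driver_specific.nvme[].pci_address' \
            <(rpc_cmd bdev_get_bdevs) | sort -u
    }

An empty result means SPDK has finished tearing down the bdevs for the removed controllers; a result of 0000:00:10.0 and 0000:00:11.0 is what the @71 comparison expects after re-attach.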
00:22:19.823 [2024-07-10 12:24:29.223576] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:19.823 [2024-07-10 12:24:29.223627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:22:19.823 [2024-07-10 12:24:29.223648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.823 [2024-07-10 12:24:29.223671] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:19.823 [2024-07-10 12:24:29.223686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:22:19.823 [2024-07-10 12:24:29.223699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.823 [2024-07-10 12:24:29.223716] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:19.823 [2024-07-10 12:24:29.223739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:22:19.823 [2024-07-10 12:24:29.223755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.823 [2024-07-10 12:24:29.223768] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:19.823 [2024-07-10 12:24:29.223782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:22:19.823 [2024-07-10 12:24:29.223794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.823 12:24:29 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.823 12:24:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:22:19.823 12:24:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:22:20.391 [2024-07-10 12:24:29.720271] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
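The repeated "(( N > 0 ))" / "sleep 0.5" / "Still waiting for %s to be gone" pattern is the poll loop at sw_hotplug.sh@50-@51: after the surprise removal, the test keeps re-reading bdev_bdfs until the removed controllers' addresses stop showing up. Roughly, as a sketch reusing the helper above:

    # Wait for the detached controllers to disappear from the bdev layer.
    bdfs=($(bdev_bdfs))
    while ((${#bdfs[@]} > 0)); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done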
00:22:20.391 [2024-07-10 12:24:29.722891] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:20.391 [2024-07-10 12:24:29.722943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:22:20.391 [2024-07-10 12:24:29.722961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.391 [2024-07-10 12:24:29.722989] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:20.391 [2024-07-10 12:24:29.723002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:22:20.391 [2024-07-10 12:24:29.723017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.391 [2024-07-10 12:24:29.723031] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:20.391 [2024-07-10 12:24:29.723046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:22:20.391 [2024-07-10 12:24:29.723058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.391 [2024-07-10 12:24:29.723074] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:20.391 [2024-07-10 12:24:29.723087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:22:20.391 [2024-07-10 12:24:29.723101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.391 12:24:29 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:22:20.391 12:24:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:22:20.391 12:24:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:22:20.391 12:24:29 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:22:20.391 12:24:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:22:20.391 12:24:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:22:20.391 12:24:29 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.391 12:24:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:20.391 12:24:29 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.391 12:24:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:22:20.391 12:24:29 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:22:20.650 12:24:29 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:22:20.650 12:24:29 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:22:20.650 12:24:29 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:22:20.650 12:24:29 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:22:20.650 12:24:30 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:22:20.650 12:24:30 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:22:20.650 12:24:30 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:22:20.650 12:24:30 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:22:20.650 12:24:30 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:22:20.650 12:24:30 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:22:20.650 12:24:30 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:22:32.863 12:24:42 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:22:32.863 12:24:42 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:22:32.863 12:24:42 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:22:32.863 12:24:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:22:32.863 12:24:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:22:32.863 12:24:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:22:32.863 12:24:42 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.863 12:24:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:32.863 12:24:42 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.863 12:24:42 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:22:32.863 12:24:42 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:22:32.863 12:24:42 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:22:32.863 12:24:42 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:22:32.863 [2024-07-10 12:24:42.200180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:22:32.863 [2024-07-10 12:24:42.203007] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:32.863 [2024-07-10 12:24:42.203157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.863 [2024-07-10 12:24:42.203309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.863 [2024-07-10 12:24:42.203378] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:32.863 [2024-07-10 12:24:42.203417] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.863 [2024-07-10 12:24:42.203519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.863 [2024-07-10 12:24:42.203584] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:32.863 [2024-07-10 12:24:42.203620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.863 [2024-07-10 12:24:42.203723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.863 [2024-07-10 12:24:42.203855] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:32.863 [2024-07-10 12:24:42.203903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.863 [2024-07-10 12:24:42.203999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.863 12:24:42 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:22:32.863 12:24:42 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:22:32.863 12:24:42 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:22:32.863 12:24:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:22:32.863 12:24:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:22:32.863 12:24:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:22:32.863 12:24:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:22:32.863 12:24:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:22:32.863 12:24:42 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.863 12:24:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:32.863 12:24:42 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.863 12:24:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:22:32.863 12:24:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:22:33.121 [2024-07-10 12:24:42.599561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:22:33.379 [2024-07-10 12:24:42.602216] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:33.379 [2024-07-10 12:24:42.602374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:22:33.379 [2024-07-10 12:24:42.602400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.379 [2024-07-10 12:24:42.602428] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:33.379 [2024-07-10 12:24:42.602441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:22:33.379 [2024-07-10 12:24:42.602456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.379 [2024-07-10 12:24:42.602469] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:33.379 [2024-07-10 12:24:42.602484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:22:33.379 [2024-07-10 12:24:42.602496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.379 [2024-07-10 12:24:42.602515] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:33.379 [2024-07-10 12:24:42.602527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:22:33.379 [2024-07-10 12:24:42.602542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.379 12:24:42 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:22:33.379 12:24:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:22:33.379 12:24:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:22:33.379 12:24:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:22:33.379 12:24:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:22:33.379 12:24:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:22:33.379 12:24:42 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.379 12:24:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:33.379 12:24:42 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.379 12:24:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:22:33.379 12:24:42 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:22:33.637 12:24:42 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:22:33.637 12:24:42 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:22:33.637 12:24:42 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:22:33.637 12:24:43 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:22:33.637 12:24:43 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:22:33.637 12:24:43 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:22:33.637 12:24:43 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:22:33.637 12:24:43 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:22:33.894 12:24:43 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:22:33.894 12:24:43 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:22:33.894 12:24:43 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:22:46.139 12:24:55 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:22:46.139 12:24:55 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:22:46.139 12:24:55 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:22:46.139 12:24:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:22:46.139 12:24:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:22:46.139 12:24:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:22:46.139 12:24:55 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.139 12:24:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:46.139 12:24:55 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.139 12:24:55 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:22:46.139 12:24:55 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:22:46.139 12:24:55 sw_hotplug -- common/autotest_common.sh@715 -- # time=45.16 00:22:46.139 12:24:55 sw_hotplug -- common/autotest_common.sh@716 -- # echo 45.16 00:22:46.139 12:24:55 sw_hotplug -- common/autotest_common.sh@718 -- # return 0 00:22:46.139 12:24:55 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.16 00:22:46.139 12:24:55 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.16 2 00:22:46.139 remove_attach_helper took 45.16s to complete (handling 2 nvme drive(s)) 12:24:55 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:22:46.139 12:24:55 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.139 12:24:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:46.139 12:24:55 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.139 12:24:55 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:22:46.139 12:24:55 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.139 12:24:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:46.139 12:24:55 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.139 12:24:55 sw_hotplug -- 
nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:22:46.139 12:24:55 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:22:46.139 12:24:55 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:22:46.139 12:24:55 sw_hotplug -- common/autotest_common.sh@705 -- # local cmd_es=0 00:22:46.139 12:24:55 sw_hotplug -- common/autotest_common.sh@707 -- # [[ -t 0 ]] 00:22:46.139 12:24:55 sw_hotplug -- common/autotest_common.sh@707 -- # exec 00:22:46.139 12:24:55 sw_hotplug -- common/autotest_common.sh@709 -- # local time=0 TIMEFORMAT=%2R 00:22:46.139 12:24:55 sw_hotplug -- common/autotest_common.sh@715 -- # remove_attach_helper 3 6 true 00:22:46.139 12:24:55 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:22:46.139 12:24:55 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:22:46.140 12:24:55 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:22:46.140 12:24:55 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:22:46.140 12:24:55 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:22:52.709 12:25:01 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:22:52.709 12:25:01 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:22:52.709 12:25:01 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:22:52.709 12:25:01 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:22:52.709 12:25:01 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:22:52.709 12:25:01 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:22:52.709 12:25:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:22:52.709 12:25:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:22:52.709 12:25:01 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:22:52.709 12:25:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:22:52.709 12:25:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:22:52.709 12:25:01 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:52.709 12:25:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:52.709 [2024-07-10 12:25:01.348861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
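The autotest_common.sh@705-@715 trace at the start of this second run is timing_cmd, the wrapper that produces the 45.16/45.07 figures printed at sw_hotplug.sh@22. It runs the helper under bash's time keyword with TIMEFORMAT=%2R so that only the real time, to two decimals, is captured. A sketch of that wrapper: the variable names, the bare exec, and TIMEFORMAT come from the trace, while the exact file-descriptor plumbing below is an assumption.

    timing_cmd() {
        local cmd_es=0
        local time=0 TIMEFORMAT=%2R          # report bare "real" seconds, e.g. 45.16
        exec 3>&1                            # keep a handle on the original stdout
        # The wrapped command keeps writing to the log via fd 3; only the `time`
        # report (printed on the group's stderr) is captured into $time.
        time=$( { time "$@" 1>&3 2>&3; } 2>&1 ) || cmd_es=$?
        exec 3>&-
        echo "$time"
        return "$cmd_es"
    }

    helper_time=$(timing_cmd remove_attach_helper 3 6 true)
    printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
        "$helper_time" 2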
00:22:52.709 [2024-07-10 12:25:01.350772] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:52.709 [2024-07-10 12:25:01.350816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.709 [2024-07-10 12:25:01.350836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.709 [2024-07-10 12:25:01.350863] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:52.709 [2024-07-10 12:25:01.350878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.709 [2024-07-10 12:25:01.350892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.709 [2024-07-10 12:25:01.350908] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:52.709 [2024-07-10 12:25:01.350921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.709 [2024-07-10 12:25:01.350936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.709 [2024-07-10 12:25:01.350949] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:52.709 [2024-07-10 12:25:01.350963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.709 [2024-07-10 12:25:01.350975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.709 12:25:01 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.709 12:25:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:22:52.709 12:25:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:22:52.709 [2024-07-10 12:25:01.748263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
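Each hotplug event begins with the paired "echo 1" lines at sw_hotplug.sh@39-@40: the test loops over the devices under test and removes each one from the PCI bus in software, which is why spdk_tgt immediately reports the controllers "in failed state" and aborts their outstanding admin commands. xtrace does not print redirection targets, so the sysfs path below is an assumption about where those writes land:

    # Assumed shape of the removal pass; the trace only shows the 'echo 1' itself.
    nvmes=(0000:00:10.0 0000:00:11.0)
    for dev in "${nvmes[@]}"; do
        echo 1 > "/sys/bus/pci/devices/$dev/remove"   # hot-remove the device under spdk_tgt
    done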
00:22:52.709 [2024-07-10 12:25:01.750239] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:52.709 [2024-07-10 12:25:01.750294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.709 [2024-07-10 12:25:01.750312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.709 [2024-07-10 12:25:01.750339] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:52.709 [2024-07-10 12:25:01.750352] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.709 [2024-07-10 12:25:01.750368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.709 [2024-07-10 12:25:01.750382] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:52.709 [2024-07-10 12:25:01.750397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.709 [2024-07-10 12:25:01.750409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.709 [2024-07-10 12:25:01.750426] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:22:52.709 [2024-07-10 12:25:01.750438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.709 [2024-07-10 12:25:01.750452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.709 12:25:01 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:22:52.709 12:25:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:22:52.709 12:25:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:22:52.709 12:25:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:22:52.709 12:25:01 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:22:52.709 12:25:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:22:52.709 12:25:01 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:52.709 12:25:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:52.709 12:25:01 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.709 12:25:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:22:52.709 12:25:01 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:22:52.709 12:25:02 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:22:52.709 12:25:02 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:22:52.709 12:25:02 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:22:52.709 12:25:02 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:22:52.709 12:25:02 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:22:52.709 12:25:02 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:22:52.709 12:25:02 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:22:52.709 12:25:02 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:22:52.968 12:25:02 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:22:52.968 12:25:02 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:22:52.968 12:25:02 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:23:05.173 12:25:14 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:23:05.173 12:25:14 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:23:05.173 12:25:14 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:23:05.173 12:25:14 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:23:05.173 12:25:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:23:05.173 12:25:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:23:05.173 12:25:14 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.173 12:25:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:23:05.173 12:25:14 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.173 12:25:14 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:23:05.173 12:25:14 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:23:05.173 12:25:14 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:23:05.173 12:25:14 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:23:05.173 [2024-07-10 12:25:14.328025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:23:05.173 [2024-07-10 12:25:14.330144] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:05.173 [2024-07-10 12:25:14.330190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.173 [2024-07-10 12:25:14.330214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.173 [2024-07-10 12:25:14.330238] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:05.173 [2024-07-10 12:25:14.330254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.173 [2024-07-10 12:25:14.330267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.173 [2024-07-10 12:25:14.330283] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:05.173 [2024-07-10 12:25:14.330295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.174 [2024-07-10 12:25:14.330310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.174 [2024-07-10 12:25:14.330322] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:05.174 [2024-07-10 12:25:14.330336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.174 [2024-07-10 12:25:14.330348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.174 12:25:14 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:23:05.174 12:25:14 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:23:05.174 12:25:14 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:23:05.174 12:25:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:23:05.174 12:25:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:23:05.174 12:25:14 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:23:05.174 12:25:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:23:05.174 12:25:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:23:05.174 12:25:14 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.174 12:25:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:23:05.174 12:25:14 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.174 12:25:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:23:05.174 12:25:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:23:05.433 [2024-07-10 12:25:14.727405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:23:05.433 [2024-07-10 12:25:14.729367] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:05.433 [2024-07-10 12:25:14.729419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.433 [2024-07-10 12:25:14.729438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.433 [2024-07-10 12:25:14.729462] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:05.433 [2024-07-10 12:25:14.729474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.433 [2024-07-10 12:25:14.729495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.433 [2024-07-10 12:25:14.729509] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:05.433 [2024-07-10 12:25:14.729523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.433 [2024-07-10 12:25:14.729535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.433 [2024-07-10 12:25:14.729552] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:05.433 [2024-07-10 12:25:14.729563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.433 [2024-07-10 12:25:14.729578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.692 12:25:14 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:23:05.692 12:25:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:23:05.692 12:25:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:23:05.692 12:25:14 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:23:05.692 12:25:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:23:05.692 12:25:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:23:05.692 12:25:14 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.692 12:25:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:23:05.692 12:25:14 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.692 12:25:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:23:05.692 12:25:14 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:23:05.692 12:25:15 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:23:05.692 12:25:15 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:23:05.692 12:25:15 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:23:05.692 12:25:15 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:23:05.692 12:25:15 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:23:05.692 12:25:15 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:23:05.692 12:25:15 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:23:05.692 12:25:15 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:23:05.951 12:25:15 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:23:05.951 12:25:15 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:23:05.951 12:25:15 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:23:18.156 12:25:27 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:23:18.156 12:25:27 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:23:18.156 12:25:27 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:23:18.156 12:25:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:23:18.156 12:25:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:23:18.156 12:25:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:23:18.156 12:25:27 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.156 12:25:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:23:18.156 12:25:27 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.156 12:25:27 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:23:18.156 12:25:27 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:23:18.156 12:25:27 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:23:18.156 12:25:27 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:23:18.156 12:25:27 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:23:18.156 12:25:27 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:23:18.156 12:25:27 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:23:18.156 12:25:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:23:18.156 12:25:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:23:18.156 12:25:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:23:18.156 12:25:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:23:18.156 12:25:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:23:18.156 12:25:27 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.156 12:25:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:23:18.156 [2024-07-10 12:25:27.407033] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
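The @56-@66 block just traced is the re-attach half of the cycle: one PCI rescan, a per-device rebind to uio_pci_generic, and a fixed 12-second settle before @70-@71 checks that bdev_bdfs once again reports exactly 0000:00:10.0 and 0000:00:11.0. As before, the redirection targets are not visible in the xtrace output, so the paths in this sketch are assumptions:

    # Assumed re-attach sequence behind sw_hotplug.sh@56-@71.
    echo 1 > /sys/bus/pci/rescan                                              # @56
    for dev in "${nvmes[@]}"; do
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"    # @59
        echo "$dev" > /sys/bus/pci/drivers_probe                              # @60-@61 (exact files assumed)
        echo '' > "/sys/bus/pci/devices/$dev/driver_override"                 # @62: clear the override
    done
    sleep 12                                                                  # @66: let attach/examine finish
    bdfs=($(bdev_bdfs))
    [[ ${bdfs[*]} == "0000:00:10.0 0000:00:11.0" ]]                           # the @71 comparison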
00:23:18.156 [2024-07-10 12:25:27.408933] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:18.156 [2024-07-10 12:25:27.408978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.156 [2024-07-10 12:25:27.408998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.156 [2024-07-10 12:25:27.409022] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:18.156 [2024-07-10 12:25:27.409037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.156 [2024-07-10 12:25:27.409050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.156 [2024-07-10 12:25:27.409067] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:18.156 [2024-07-10 12:25:27.409079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.156 [2024-07-10 12:25:27.409097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.156 [2024-07-10 12:25:27.409110] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:18.156 [2024-07-10 12:25:27.409123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.156 [2024-07-10 12:25:27.409135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.156 12:25:27 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.156 12:25:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:23:18.156 12:25:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:23:18.414 [2024-07-10 12:25:27.806409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
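Taken together, the traces repeat the same detach / wait / re-attach cycle hotplug_events=3 times per run. As an overview only, with hypothetical helper names standing in for the steps sketched above (they are not functions from the real script):

    # Rough overall shape of remove_attach_helper 3 6 true, reconstructed from the trace.
    remove_attach_helper() {
        local hotplug_events=$1 hotplug_wait=$2 use_bdev=$3
        sleep "$hotplug_wait"                     # initial settle (sw_hotplug.sh@36)
        while ((hotplug_events--)); do
            detach_nvmes                          # hypothetical: the 'echo 1' removal pass
            wait_for_bdevs_gone                   # hypothetical: poll bdev_bdfs every 0.5 s
            reattach_nvmes                        # hypothetical: rescan + uio_pci_generic rebind, sleep 12
            [[ $(bdev_bdfs | xargs) == "0000:00:10.0 0000:00:11.0" ]]   # @71-style check
        done
    }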
00:23:18.415 [2024-07-10 12:25:27.809026] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:18.415 [2024-07-10 12:25:27.809080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.415 [2024-07-10 12:25:27.809098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.415 [2024-07-10 12:25:27.809124] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:18.415 [2024-07-10 12:25:27.809137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.415 [2024-07-10 12:25:27.809152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.415 [2024-07-10 12:25:27.809166] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:18.415 [2024-07-10 12:25:27.809182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.415 [2024-07-10 12:25:27.809195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.415 [2024-07-10 12:25:27.809210] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:18.415 [2024-07-10 12:25:27.809222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.415 [2024-07-10 12:25:27.809242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.673 12:25:27 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:23:18.673 12:25:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:23:18.673 12:25:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:23:18.673 12:25:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:23:18.673 12:25:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:23:18.673 12:25:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:23:18.673 12:25:27 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.673 12:25:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:23:18.673 12:25:27 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.673 12:25:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:23:18.673 12:25:27 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:23:18.673 12:25:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:23:18.673 12:25:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:23:18.673 12:25:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:23:18.932 12:25:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:23:18.932 12:25:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:23:18.932 12:25:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:23:18.932 12:25:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:23:18.932 12:25:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:23:18.932 12:25:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:23:18.932 12:25:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:23:18.932 12:25:28 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:23:31.157 12:25:40 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:23:31.157 12:25:40 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:23:31.157 12:25:40 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:23:31.157 12:25:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:23:31.157 12:25:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:23:31.157 12:25:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:23:31.157 12:25:40 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.157 12:25:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:23:31.157 12:25:40 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.157 12:25:40 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:23:31.157 12:25:40 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:23:31.157 12:25:40 sw_hotplug -- common/autotest_common.sh@715 -- # time=45.07 00:23:31.157 12:25:40 sw_hotplug -- common/autotest_common.sh@716 -- # echo 45.07 00:23:31.157 12:25:40 sw_hotplug -- common/autotest_common.sh@718 -- # return 0 00:23:31.157 12:25:40 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.07 00:23:31.157 12:25:40 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.07 2 00:23:31.157 remove_attach_helper took 45.07s to complete (handling 2 nvme drive(s)) 12:25:40 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:23:31.157 12:25:40 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 74335 00:23:31.157 12:25:40 sw_hotplug -- common/autotest_common.sh@948 -- # '[' -z 74335 ']' 00:23:31.157 12:25:40 sw_hotplug -- common/autotest_common.sh@952 -- # kill -0 74335 00:23:31.157 12:25:40 sw_hotplug -- common/autotest_common.sh@953 -- # uname 00:23:31.157 12:25:40 sw_hotplug -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:31.157 12:25:40 sw_hotplug -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74335 00:23:31.157 12:25:40 sw_hotplug -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:31.157 12:25:40 sw_hotplug -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:31.157 killing process with pid 74335 00:23:31.157 12:25:40 sw_hotplug -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74335' 00:23:31.158 12:25:40 sw_hotplug -- common/autotest_common.sh@967 -- # kill 74335 00:23:31.158 12:25:40 sw_hotplug -- common/autotest_common.sh@972 -- # wait 74335 00:23:33.686 12:25:43 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:34.252 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:34.818 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:34.818 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:34.818 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:23:35.077 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:23:35.077 00:23:35.077 real 2m33.883s 00:23:35.077 user 1m51.510s 00:23:35.077 sys 0m22.480s 00:23:35.077 12:25:44 sw_hotplug -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:23:35.077 ************************************ 00:23:35.077 END TEST sw_hotplug 00:23:35.077 ************************************ 00:23:35.077 12:25:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:23:35.077 12:25:44 -- common/autotest_common.sh@1142 -- # return 0 00:23:35.077 12:25:44 -- spdk/autotest.sh@247 -- # [[ 1 -eq 1 ]] 00:23:35.077 12:25:44 -- spdk/autotest.sh@248 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:23:35.077 12:25:44 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:35.077 12:25:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:35.077 12:25:44 -- common/autotest_common.sh@10 -- # set +x 00:23:35.077 ************************************ 00:23:35.077 START TEST nvme_xnvme 00:23:35.077 ************************************ 00:23:35.077 12:25:44 nvme_xnvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:23:35.336 * Looking for test storage... 00:23:35.336 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:23:35.336 12:25:44 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:35.336 12:25:44 nvme_xnvme -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:35.336 12:25:44 nvme_xnvme -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:35.336 12:25:44 nvme_xnvme -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:35.336 12:25:44 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.336 12:25:44 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.336 12:25:44 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.336 12:25:44 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:23:35.336 12:25:44 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.336 12:25:44 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 
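The xnvme copy test that starts here builds its own I/O targets: modprobe null_blk gb=1 creates a 1 GiB /dev/nullb0 for the xnvme bdev to sit on, and the malloc bdev on the other side is sized to match (2097152 blocks x 512 B = 1 GiB). The traced variables, collected into one sketch:

    # Devices and bdev parameters used by xnvme_to_malloc_dd_copy (from the trace above).
    modprobe null_blk gb=1                         # init_null_blk gb=1 -> 1 GiB /dev/nullb0

    mbdev0=malloc0   mbdev0_bs=512   mbdev0_b=2097152   # 2097152 * 512 B = 1 GiB malloc bdev
    xnvme0=null0     xnvme0_dev=/dev/nullb0
    xnvme_io=(libaio io_uring)                     # each I/O engine gets its own copy pass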
00:23:35.336 12:25:44 nvme_xnvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:35.336 12:25:44 nvme_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:35.336 12:25:44 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:35.336 ************************************ 00:23:35.336 START TEST xnvme_to_malloc_dd_copy 00:23:35.336 ************************************ 00:23:35.336 12:25:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1123 -- # malloc_to_xnvme_copy 00:23:35.336 12:25:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:23:35.336 12:25:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:23:35.336 12:25:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:23:35.336 12:25:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # return 00:23:35.336 12:25:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:23:35.336 12:25:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:23:35.336 12:25:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:23:35.336 12:25:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:23:35.336 12:25:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:23:35.336 12:25:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:23:35.336 12:25:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:23:35.336 12:25:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:23:35.336 12:25:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:23:35.336 12:25:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:23:35.336 12:25:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:23:35.336 12:25:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:23:35.336 12:25:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:23:35.336 12:25:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:23:35.336 12:25:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:23:35.336 12:25:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:23:35.336 12:25:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:23:35.336 12:25:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:23:35.336 12:25:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:23:35.336 12:25:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:23:35.336 { 00:23:35.336 "subsystems": [ 00:23:35.336 { 00:23:35.337 "subsystem": "bdev", 00:23:35.337 "config": [ 00:23:35.337 { 00:23:35.337 "params": { 00:23:35.337 "block_size": 512, 00:23:35.337 "num_blocks": 2097152, 00:23:35.337 "name": "malloc0" 00:23:35.337 }, 00:23:35.337 "method": 
"bdev_malloc_create" 00:23:35.337 }, 00:23:35.337 { 00:23:35.337 "params": { 00:23:35.337 "io_mechanism": "libaio", 00:23:35.337 "filename": "/dev/nullb0", 00:23:35.337 "name": "null0" 00:23:35.337 }, 00:23:35.337 "method": "bdev_xnvme_create" 00:23:35.337 }, 00:23:35.337 { 00:23:35.337 "method": "bdev_wait_for_examine" 00:23:35.337 } 00:23:35.337 ] 00:23:35.337 } 00:23:35.337 ] 00:23:35.337 } 00:23:35.337 [2024-07-10 12:25:44.737017] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:23:35.337 [2024-07-10 12:25:44.737143] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75692 ] 00:23:35.596 [2024-07-10 12:25:44.908136] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:35.855 [2024-07-10 12:25:45.153711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:46.604  Copying: 260/1024 [MB] (260 MBps) Copying: 521/1024 [MB] (261 MBps) Copying: 779/1024 [MB] (257 MBps) Copying: 1024/1024 [MB] (average 259 MBps) 00:23:46.604 00:23:46.604 12:25:55 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:23:46.604 12:25:55 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:23:46.604 12:25:55 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:23:46.604 12:25:55 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:23:46.604 { 00:23:46.604 "subsystems": [ 00:23:46.604 { 00:23:46.604 "subsystem": "bdev", 00:23:46.604 "config": [ 00:23:46.604 { 00:23:46.604 "params": { 00:23:46.604 "block_size": 512, 00:23:46.604 "num_blocks": 2097152, 00:23:46.604 "name": "malloc0" 00:23:46.604 }, 00:23:46.604 "method": "bdev_malloc_create" 00:23:46.604 }, 00:23:46.604 { 00:23:46.604 "params": { 00:23:46.604 "io_mechanism": "libaio", 00:23:46.604 "filename": "/dev/nullb0", 00:23:46.604 "name": "null0" 00:23:46.604 }, 00:23:46.604 "method": "bdev_xnvme_create" 00:23:46.604 }, 00:23:46.604 { 00:23:46.604 "method": "bdev_wait_for_examine" 00:23:46.604 } 00:23:46.604 ] 00:23:46.604 } 00:23:46.604 ] 00:23:46.604 } 00:23:46.604 [2024-07-10 12:25:55.233275] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:23:46.604 [2024-07-10 12:25:55.233424] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75812 ] 00:23:46.604 [2024-07-10 12:25:55.406852] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:46.604 [2024-07-10 12:25:55.657581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:56.256  Copying: 262/1024 [MB] (262 MBps) Copying: 525/1024 [MB] (263 MBps) Copying: 792/1024 [MB] (267 MBps) Copying: 1024/1024 [MB] (average 264 MBps) 00:23:56.256 00:23:56.256 12:26:05 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:23:56.256 12:26:05 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:23:56.256 12:26:05 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:23:56.256 12:26:05 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:23:56.256 12:26:05 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:23:56.256 12:26:05 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:23:56.256 { 00:23:56.256 "subsystems": [ 00:23:56.256 { 00:23:56.256 "subsystem": "bdev", 00:23:56.256 "config": [ 00:23:56.256 { 00:23:56.256 "params": { 00:23:56.256 "block_size": 512, 00:23:56.256 "num_blocks": 2097152, 00:23:56.256 "name": "malloc0" 00:23:56.256 }, 00:23:56.256 "method": "bdev_malloc_create" 00:23:56.256 }, 00:23:56.257 { 00:23:56.257 "params": { 00:23:56.257 "io_mechanism": "io_uring", 00:23:56.257 "filename": "/dev/nullb0", 00:23:56.257 "name": "null0" 00:23:56.257 }, 00:23:56.257 "method": "bdev_xnvme_create" 00:23:56.257 }, 00:23:56.257 { 00:23:56.257 "method": "bdev_wait_for_examine" 00:23:56.257 } 00:23:56.257 ] 00:23:56.257 } 00:23:56.257 ] 00:23:56.257 } 00:23:56.257 [2024-07-10 12:26:05.679206] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
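The third and fourth passes rerun both copy directions with the xNVMe bdev created over io_uring; relative to the libaio sketch above only the io_mechanism field of the bdev_xnvme_create entry changes:

        { "method": "bdev_xnvme_create",
          "params": { "name": "null0", "filename": "/dev/nullb0", "io_mechanism": "io_uring" } }

In this trace the switch lifts the average copy rate from 259 and 264 MBps with libaio to 270 and 275 MBps with io_uring.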
00:23:56.257 [2024-07-10 12:26:05.679487] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75927 ] 00:23:56.515 [2024-07-10 12:26:05.851558] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:56.773 [2024-07-10 12:26:06.088316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:07.421  Copying: 270/1024 [MB] (270 MBps) Copying: 544/1024 [MB] (273 MBps) Copying: 813/1024 [MB] (269 MBps) Copying: 1024/1024 [MB] (average 270 MBps) 00:24:07.421 00:24:07.421 12:26:15 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:24:07.421 12:26:15 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:24:07.421 12:26:15 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:24:07.421 12:26:15 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:24:07.421 { 00:24:07.421 "subsystems": [ 00:24:07.421 { 00:24:07.421 "subsystem": "bdev", 00:24:07.421 "config": [ 00:24:07.421 { 00:24:07.421 "params": { 00:24:07.421 "block_size": 512, 00:24:07.421 "num_blocks": 2097152, 00:24:07.421 "name": "malloc0" 00:24:07.421 }, 00:24:07.421 "method": "bdev_malloc_create" 00:24:07.421 }, 00:24:07.421 { 00:24:07.421 "params": { 00:24:07.421 "io_mechanism": "io_uring", 00:24:07.421 "filename": "/dev/nullb0", 00:24:07.421 "name": "null0" 00:24:07.421 }, 00:24:07.421 "method": "bdev_xnvme_create" 00:24:07.421 }, 00:24:07.421 { 00:24:07.421 "method": "bdev_wait_for_examine" 00:24:07.421 } 00:24:07.421 ] 00:24:07.421 } 00:24:07.421 ] 00:24:07.421 } 00:24:07.421 [2024-07-10 12:26:16.046283] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:24:07.421 [2024-07-10 12:26:16.046410] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76043 ] 00:24:07.421 [2024-07-10 12:26:16.217313] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.421 [2024-07-10 12:26:16.460037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:16.946  Copying: 272/1024 [MB] (272 MBps) Copying: 545/1024 [MB] (273 MBps) Copying: 822/1024 [MB] (276 MBps) Copying: 1024/1024 [MB] (average 275 MBps) 00:24:16.946 00:24:16.946 12:26:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:24:16.946 12:26:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@195 -- # modprobe -r null_blk 00:24:16.946 00:24:16.946 real 0m41.692s 00:24:16.946 user 0m36.695s 00:24:16.946 sys 0m4.479s 00:24:16.946 ************************************ 00:24:16.946 END TEST xnvme_to_malloc_dd_copy 00:24:16.946 ************************************ 00:24:16.946 12:26:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:16.946 12:26:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:24:16.946 12:26:26 nvme_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:24:16.946 12:26:26 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:24:16.946 12:26:26 nvme_xnvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:16.946 12:26:26 nvme_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:16.946 12:26:26 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:24:16.946 ************************************ 00:24:16.946 START TEST xnvme_bdevperf 00:24:16.946 ************************************ 00:24:16.946 12:26:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1123 -- # xnvme_bdevperf 00:24:16.946 12:26:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:24:16.946 12:26:26 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:24:16.946 12:26:26 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:24:16.946 12:26:26 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # return 00:24:16.946 12:26:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:24:16.946 12:26:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:24:16.946 12:26:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:24:16.946 12:26:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:24:16.946 12:26:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:24:16.946 12:26:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:24:16.946 12:26:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:24:16.946 12:26:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:24:16.946 12:26:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:24:16.946 12:26:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:24:16.946 12:26:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:24:16.946 12:26:26 
nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:24:16.946 12:26:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:24:16.946 12:26:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:24:16.946 12:26:26 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:24:16.946 12:26:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:17.204 { 00:24:17.204 "subsystems": [ 00:24:17.204 { 00:24:17.204 "subsystem": "bdev", 00:24:17.204 "config": [ 00:24:17.204 { 00:24:17.204 "params": { 00:24:17.204 "io_mechanism": "libaio", 00:24:17.204 "filename": "/dev/nullb0", 00:24:17.204 "name": "null0" 00:24:17.204 }, 00:24:17.204 "method": "bdev_xnvme_create" 00:24:17.204 }, 00:24:17.204 { 00:24:17.204 "method": "bdev_wait_for_examine" 00:24:17.204 } 00:24:17.204 ] 00:24:17.204 } 00:24:17.204 ] 00:24:17.204 } 00:24:17.204 [2024-07-10 12:26:26.500701] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:24:17.204 [2024-07-10 12:26:26.500836] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76186 ] 00:24:17.204 [2024-07-10 12:26:26.672023] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:17.462 [2024-07-10 12:26:26.916617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:18.028 Running I/O for 5 seconds... 00:24:23.318 00:24:23.318 Latency(us) 00:24:23.318 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:23.318 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:24:23.318 null0 : 5.00 158543.82 619.31 0.00 0.00 401.21 129.95 2895.16 00:24:23.318 =================================================================================================================== 00:24:23.318 Total : 158543.82 619.31 0.00 0.00 401.21 129.95 2895.16 00:24:24.251 12:26:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:24:24.251 12:26:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:24:24.251 12:26:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:24:24.251 12:26:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:24:24.251 12:26:33 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:24:24.251 12:26:33 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:24.251 { 00:24:24.251 "subsystems": [ 00:24:24.251 { 00:24:24.251 "subsystem": "bdev", 00:24:24.251 "config": [ 00:24:24.251 { 00:24:24.251 "params": { 00:24:24.251 "io_mechanism": "io_uring", 00:24:24.251 "filename": "/dev/nullb0", 00:24:24.251 "name": "null0" 00:24:24.251 }, 00:24:24.251 "method": "bdev_xnvme_create" 00:24:24.251 }, 00:24:24.251 { 00:24:24.251 "method": "bdev_wait_for_examine" 00:24:24.251 } 00:24:24.251 ] 00:24:24.251 } 00:24:24.251 ] 00:24:24.251 } 00:24:24.251 [2024-07-10 12:26:33.712953] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
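xnvme_bdevperf drives the same null0 bdev with the bdevperf example application instead of spdk_dd: 4 KiB random reads at queue depth 64 for 5 seconds, once per io_mechanism. A standalone sketch of the libaio run, under the same assumptions as the copy sketch above:

# Sketch: random-read bdevperf run against an xNVMe bdev backed by null_blk
modprobe null_blk gb=1

cat > /tmp/xnvme_perf.json << 'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_xnvme_create",
          "params": { "name": "null0", "filename": "/dev/nullb0", "io_mechanism": "libaio" } },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF

# -q 64 queue depth, -w randread workload, -t 5 seconds, -T null0 target bdev, -o 4096 I/O size
build/examples/bdevperf --json /tmp/xnvme_perf.json -q 64 -w randread -t 5 -T null0 -o 4096

The libaio pass above reports about 158.5k IOPS at roughly 401 us average latency; the io_uring pass that follows (same command line, io_mechanism switched to io_uring) reaches about 202.8k IOPS at roughly 313 us.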
00:24:24.251 [2024-07-10 12:26:33.713104] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76272 ] 00:24:24.508 [2024-07-10 12:26:33.886450] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:24.765 [2024-07-10 12:26:34.130925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:25.335 Running I/O for 5 seconds... 00:24:30.608 00:24:30.608 Latency(us) 00:24:30.608 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:30.608 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:24:30.608 null0 : 5.00 202787.97 792.14 0.00 0.00 313.09 190.82 648.12 00:24:30.608 =================================================================================================================== 00:24:30.608 Total : 202787.97 792.14 0.00 0.00 313.09 190.82 648.12 00:24:31.542 12:26:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:24:31.542 12:26:40 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@195 -- # modprobe -r null_blk 00:24:31.542 ************************************ 00:24:31.542 END TEST xnvme_bdevperf 00:24:31.542 ************************************ 00:24:31.542 00:24:31.542 real 0m14.490s 00:24:31.542 user 0m11.219s 00:24:31.542 sys 0m3.072s 00:24:31.542 12:26:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:31.542 12:26:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:31.542 12:26:40 nvme_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:24:31.542 ************************************ 00:24:31.542 END TEST nvme_xnvme 00:24:31.542 ************************************ 00:24:31.542 00:24:31.542 real 0m56.468s 00:24:31.542 user 0m48.016s 00:24:31.542 sys 0m7.736s 00:24:31.542 12:26:40 nvme_xnvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:31.542 12:26:40 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:24:31.543 12:26:40 -- common/autotest_common.sh@1142 -- # return 0 00:24:31.543 12:26:40 -- spdk/autotest.sh@249 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:24:31.543 12:26:40 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:31.543 12:26:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:31.543 12:26:40 -- common/autotest_common.sh@10 -- # set +x 00:24:31.543 ************************************ 00:24:31.543 START TEST blockdev_xnvme 00:24:31.543 ************************************ 00:24:31.543 12:26:41 blockdev_xnvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:24:31.801 * Looking for test storage... 
00:24:31.801 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:24:31.801 12:26:41 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:24:31.801 12:26:41 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:24:31.801 12:26:41 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:24:31.801 12:26:41 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:24:31.801 12:26:41 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:24:31.801 12:26:41 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:24:31.801 12:26:41 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:24:31.801 12:26:41 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:24:31.801 12:26:41 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:24:31.801 12:26:41 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:24:31.801 12:26:41 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:24:31.801 12:26:41 blockdev_xnvme -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:24:31.801 12:26:41 blockdev_xnvme -- bdev/blockdev.sh@674 -- # uname -s 00:24:31.801 12:26:41 blockdev_xnvme -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:24:31.801 12:26:41 blockdev_xnvme -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:24:31.801 12:26:41 blockdev_xnvme -- bdev/blockdev.sh@682 -- # test_type=xnvme 00:24:31.801 12:26:41 blockdev_xnvme -- bdev/blockdev.sh@683 -- # crypto_device= 00:24:31.801 12:26:41 blockdev_xnvme -- bdev/blockdev.sh@684 -- # dek= 00:24:31.801 12:26:41 blockdev_xnvme -- bdev/blockdev.sh@685 -- # env_ctx= 00:24:31.801 12:26:41 blockdev_xnvme -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:24:31.801 12:26:41 blockdev_xnvme -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:24:31.801 12:26:41 blockdev_xnvme -- bdev/blockdev.sh@690 -- # [[ xnvme == bdev ]] 00:24:31.801 12:26:41 blockdev_xnvme -- bdev/blockdev.sh@690 -- # [[ xnvme == crypto_* ]] 00:24:31.801 12:26:41 blockdev_xnvme -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:24:31.801 12:26:41 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=76419 00:24:31.801 12:26:41 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:24:31.801 12:26:41 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 76419 00:24:31.801 12:26:41 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:24:31.801 12:26:41 blockdev_xnvme -- common/autotest_common.sh@829 -- # '[' -z 76419 ']' 00:24:31.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:31.801 12:26:41 blockdev_xnvme -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:31.801 12:26:41 blockdev_xnvme -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:31.801 12:26:41 blockdev_xnvme -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:31.801 12:26:41 blockdev_xnvme -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:31.801 12:26:41 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:24:31.801 [2024-07-10 12:26:41.227143] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:24:31.801 [2024-07-10 12:26:41.227288] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76419 ] 00:24:32.058 [2024-07-10 12:26:41.397018] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:32.316 [2024-07-10 12:26:41.644476] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:33.249 12:26:42 blockdev_xnvme -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:33.249 12:26:42 blockdev_xnvme -- common/autotest_common.sh@862 -- # return 0 00:24:33.249 12:26:42 blockdev_xnvme -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:24:33.249 12:26:42 blockdev_xnvme -- bdev/blockdev.sh@729 -- # setup_xnvme_conf 00:24:33.249 12:26:42 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:24:33.249 12:26:42 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:24:33.249 12:26:42 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:33.850 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:34.108 Waiting for block devices as requested 00:24:34.108 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:34.108 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:34.365 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:24:34.365 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:24:39.633 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:24:39.633 12:26:48 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:24:39.633 12:26:48 blockdev_xnvme -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:24:39.633 12:26:48 blockdev_xnvme -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:24:39.633 12:26:48 blockdev_xnvme -- common/autotest_common.sh@1670 -- # local nvme bdf 00:24:39.633 12:26:48 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:24:39.633 12:26:48 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:24:39.633 12:26:48 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:24:39.633 12:26:48 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:39.633 12:26:48 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:24:39.633 12:26:48 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:24:39.633 12:26:48 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:24:39.633 12:26:48 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:24:39.633 12:26:48 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:24:39.633 12:26:48 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:24:39.633 12:26:48 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:24:39.633 12:26:48 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:24:39.633 12:26:48 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:24:39.633 12:26:48 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:24:39.633 12:26:48 blockdev_xnvme -- 
common/autotest_common.sh@1665 -- # [[ none != none ]] 00:24:39.633 12:26:48 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:24:39.633 12:26:48 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:24:39.633 12:26:48 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:24:39.633 12:26:48 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:24:39.633 12:26:48 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:24:39.633 12:26:48 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:24:39.633 12:26:48 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:24:39.633 12:26:48 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:24:39.633 12:26:48 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:24:39.633 12:26:48 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:24:39.633 12:26:48 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:24:39.633 12:26:48 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:24:39.633 12:26:48 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:24:39.633 12:26:48 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:24:39.633 12:26:48 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:24:39.633 12:26:48 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:24:39.633 12:26:48 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:24:39.633 12:26:48 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:24:39.633 12:26:48 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:24:39.633 12:26:48 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:24:39.633 12:26:48 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:24:39.633 12:26:48 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:24:39.633 12:26:48 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:24:39.633 12:26:48 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:24:39.633 12:26:48 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:24:39.633 12:26:48 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:24:39.633 12:26:48 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:24:39.633 12:26:48 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:24:39.633 12:26:48 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:24:39.633 12:26:48 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:24:39.633 12:26:48 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:24:39.633 12:26:48 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:24:39.633 12:26:48 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:24:39.633 12:26:48 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:24:39.633 12:26:48 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:24:39.633 12:26:48 blockdev_xnvme -- 
bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:24:39.633 12:26:48 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:24:39.633 12:26:48 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:24:39.633 12:26:48 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:24:39.633 12:26:48 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:24:39.633 12:26:48 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:24:39.633 12:26:48 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:24:39.633 12:26:48 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:24:39.633 12:26:48 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:24:39.633 12:26:48 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:24:39.633 12:26:48 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:24:39.633 12:26:48 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.633 12:26:48 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:24:39.633 12:26:48 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:24:39.633 nvme0n1 00:24:39.633 nvme1n1 00:24:39.633 nvme2n1 00:24:39.633 nvme2n2 00:24:39.633 nvme2n3 00:24:39.633 nvme3n1 00:24:39.633 12:26:48 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.633 12:26:48 blockdev_xnvme -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:24:39.633 12:26:48 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.633 12:26:48 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:24:39.633 12:26:48 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.633 12:26:48 blockdev_xnvme -- bdev/blockdev.sh@740 -- # cat 00:24:39.633 12:26:48 blockdev_xnvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:24:39.633 12:26:48 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.633 12:26:48 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:24:39.633 12:26:48 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.633 12:26:48 blockdev_xnvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:24:39.633 12:26:48 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.633 12:26:48 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:24:39.633 12:26:49 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.633 12:26:49 blockdev_xnvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:24:39.633 12:26:49 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.633 12:26:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:24:39.633 12:26:49 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.633 12:26:49 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:24:39.633 12:26:49 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:24:39.633 12:26:49 blockdev_xnvme -- bdev/blockdev.sh@748 -- # rpc_cmd 
bdev_get_bdevs 00:24:39.633 12:26:49 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.633 12:26:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:24:39.633 12:26:49 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.633 12:26:49 blockdev_xnvme -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:24:39.633 12:26:49 blockdev_xnvme -- bdev/blockdev.sh@749 -- # jq -r .name 00:24:39.892 12:26:49 blockdev_xnvme -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "58fe55b2-0eb5-4e07-b893-ad9552d69434"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "58fe55b2-0eb5-4e07-b893-ad9552d69434",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "41b7c466-15b3-48fc-b3bc-c005b5bd24ee"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "41b7c466-15b3-48fc-b3bc-c005b5bd24ee",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "4c973795-d0cb-4f5e-9860-ac40dbcb2da4"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4c973795-d0cb-4f5e-9860-ac40dbcb2da4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "c2601fa2-4b72-43c2-ac7a-86c996d0968a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c2601fa2-4b72-43c2-ac7a-86c996d0968a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' 
"r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "fa72f8f8-9f61-4443-ac7a-c3c19771c15c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "fa72f8f8-9f61-4443-ac7a-c3c19771c15c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "6b464291-9b60-4fe9-b90b-60c3db3f34a2"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "6b464291-9b60-4fe9-b90b-60c3db3f34a2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:24:39.892 12:26:49 blockdev_xnvme -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:24:39.892 12:26:49 blockdev_xnvme -- bdev/blockdev.sh@752 -- # hello_world_bdev=nvme0n1 00:24:39.892 12:26:49 blockdev_xnvme -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:24:39.892 12:26:49 blockdev_xnvme -- bdev/blockdev.sh@754 -- # killprocess 76419 00:24:39.892 12:26:49 blockdev_xnvme -- common/autotest_common.sh@948 -- # '[' -z 76419 ']' 00:24:39.892 12:26:49 blockdev_xnvme -- common/autotest_common.sh@952 -- # kill -0 76419 00:24:39.892 12:26:49 blockdev_xnvme -- common/autotest_common.sh@953 -- # uname 00:24:39.892 12:26:49 blockdev_xnvme -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:39.892 12:26:49 blockdev_xnvme -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76419 00:24:39.892 killing process with pid 76419 00:24:39.892 12:26:49 blockdev_xnvme -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:39.892 12:26:49 blockdev_xnvme -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:39.892 12:26:49 blockdev_xnvme -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 76419' 00:24:39.892 12:26:49 blockdev_xnvme -- common/autotest_common.sh@967 -- # kill 76419 00:24:39.892 12:26:49 blockdev_xnvme -- common/autotest_common.sh@972 -- # wait 76419 00:24:42.505 12:26:51 blockdev_xnvme -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:42.505 12:26:51 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:24:42.505 12:26:51 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:24:42.505 12:26:51 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:42.505 12:26:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:24:42.505 ************************************ 00:24:42.505 START TEST bdev_hello_world 00:24:42.505 ************************************ 00:24:42.505 12:26:51 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:24:42.505 [2024-07-10 12:26:51.831397] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:24:42.505 [2024-07-10 12:26:51.831523] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76795 ] 00:24:42.763 [2024-07-10 12:26:52.002553] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:43.021 [2024-07-10 12:26:52.245612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:43.279 [2024-07-10 12:26:52.724775] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:24:43.279 [2024-07-10 12:26:52.724834] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:24:43.279 [2024-07-10 12:26:52.724854] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:24:43.279 [2024-07-10 12:26:52.726887] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:24:43.279 [2024-07-10 12:26:52.727212] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:24:43.279 [2024-07-10 12:26:52.727234] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:24:43.279 [2024-07-10 12:26:52.727490] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
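For blockdev_xnvme the suite rebinds the test NVMe controllers to the kernel driver, registers every non-zoned /dev/nvme*n* namespace as an xNVMe bdev over io_uring, saves the resulting configuration, and then runs hello_bdev against nvme0n1, which is the write and read-back just logged. A rough standalone equivalent, using save_config as a stand-in for the suite's per-subsystem dump and an illustrative /tmp/bdev.json (the suite itself writes test/bdev/bdev.json):

# Sketch: build an xNVMe bdev config from raw namespaces, then run hello_bdev against it
build/bin/spdk_tgt & tgt_pid=$!
sleep 2                                     # the suite waits on the RPC socket instead of sleeping
scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring
scripts/rpc.py bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring
# nvme2n1, nvme2n2, nvme2n3 and nvme3n1 are registered the same way in the trace
scripts/rpc.py bdev_wait_for_examine
scripts/rpc.py save_config > /tmp/bdev.json
kill $tgt_pid

# hello_bdev loads the saved config and performs one write plus read-back on the named bdev
build/examples/hello_bdev --json /tmp/bdev.json -b nvme0n1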
00:24:43.279 00:24:43.279 [2024-07-10 12:26:52.727511] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:24:44.653 00:24:44.653 real 0m2.313s 00:24:44.653 user 0m1.927s 00:24:44.653 sys 0m0.269s 00:24:44.653 12:26:54 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:44.653 12:26:54 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:24:44.653 ************************************ 00:24:44.653 END TEST bdev_hello_world 00:24:44.653 ************************************ 00:24:44.653 12:26:54 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:24:44.653 12:26:54 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:24:44.653 12:26:54 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:44.653 12:26:54 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:44.653 12:26:54 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:24:44.653 ************************************ 00:24:44.653 START TEST bdev_bounds 00:24:44.653 ************************************ 00:24:44.653 12:26:54 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:24:44.653 12:26:54 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:24:44.653 12:26:54 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=76841 00:24:44.653 Process bdevio pid: 76841 00:24:44.653 12:26:54 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:24:44.653 12:26:54 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 76841' 00:24:44.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:44.653 12:26:54 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 76841 00:24:44.653 12:26:54 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 76841 ']' 00:24:44.653 12:26:54 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:44.653 12:26:54 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:44.653 12:26:54 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:44.653 12:26:54 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:44.653 12:26:54 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:24:44.919 [2024-07-10 12:26:54.221626] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
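bdev_bounds then points the bdevio application at the same configuration: bdevio starts in wait mode and tests.py triggers the full blockdev test battery whose per-bdev suites follow. The standalone invocation mirrors the trace (config path illustrative, as above):

# Sketch: run the bdevio battery over the saved bdev config
test/bdev/bdevio/bdevio -w -s 0 --json /tmp/bdev.json &   # -w: wait for tests to be requested over RPC
test/bdev/bdevio/tests.py perform_tests                   # drives the suites and prints the CUnit summary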
00:24:44.919 [2024-07-10 12:26:54.221838] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76841 ] 00:24:45.178 [2024-07-10 12:26:54.399854] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:45.178 [2024-07-10 12:26:54.650481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:45.178 [2024-07-10 12:26:54.650646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:45.178 [2024-07-10 12:26:54.650690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:45.744 12:26:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:45.744 12:26:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:24:45.744 12:26:55 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:24:46.002 I/O targets: 00:24:46.002 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:24:46.002 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:24:46.002 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:24:46.002 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:24:46.002 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:24:46.002 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:24:46.002 00:24:46.002 00:24:46.002 CUnit - A unit testing framework for C - Version 2.1-3 00:24:46.002 http://cunit.sourceforge.net/ 00:24:46.002 00:24:46.002 00:24:46.002 Suite: bdevio tests on: nvme3n1 00:24:46.002 Test: blockdev write read block ...passed 00:24:46.002 Test: blockdev write zeroes read block ...passed 00:24:46.002 Test: blockdev write zeroes read no split ...passed 00:24:46.002 Test: blockdev write zeroes read split ...passed 00:24:46.002 Test: blockdev write zeroes read split partial ...passed 00:24:46.002 Test: blockdev reset ...passed 00:24:46.002 Test: blockdev write read 8 blocks ...passed 00:24:46.002 Test: blockdev write read size > 128k ...passed 00:24:46.002 Test: blockdev write read invalid size ...passed 00:24:46.002 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:24:46.002 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:24:46.002 Test: blockdev write read max offset ...passed 00:24:46.002 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:24:46.002 Test: blockdev writev readv 8 blocks ...passed 00:24:46.002 Test: blockdev writev readv 30 x 1block ...passed 00:24:46.002 Test: blockdev writev readv block ...passed 00:24:46.002 Test: blockdev writev readv size > 128k ...passed 00:24:46.002 Test: blockdev writev readv size > 128k in two iovs ...passed 00:24:46.002 Test: blockdev comparev and writev ...passed 00:24:46.002 Test: blockdev nvme passthru rw ...passed 00:24:46.002 Test: blockdev nvme passthru vendor specific ...passed 00:24:46.002 Test: blockdev nvme admin passthru ...passed 00:24:46.002 Test: blockdev copy ...passed 00:24:46.002 Suite: bdevio tests on: nvme2n3 00:24:46.002 Test: blockdev write read block ...passed 00:24:46.002 Test: blockdev write zeroes read block ...passed 00:24:46.002 Test: blockdev write zeroes read no split ...passed 00:24:46.002 Test: blockdev write zeroes read split ...passed 00:24:46.002 Test: blockdev write zeroes read split partial ...passed 00:24:46.002 Test: blockdev reset ...passed 
00:24:46.002 Test: blockdev write read 8 blocks ...passed 00:24:46.002 Test: blockdev write read size > 128k ...passed 00:24:46.002 Test: blockdev write read invalid size ...passed 00:24:46.002 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:24:46.002 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:24:46.002 Test: blockdev write read max offset ...passed 00:24:46.002 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:24:46.002 Test: blockdev writev readv 8 blocks ...passed 00:24:46.002 Test: blockdev writev readv 30 x 1block ...passed 00:24:46.002 Test: blockdev writev readv block ...passed 00:24:46.002 Test: blockdev writev readv size > 128k ...passed 00:24:46.002 Test: blockdev writev readv size > 128k in two iovs ...passed 00:24:46.002 Test: blockdev comparev and writev ...passed 00:24:46.002 Test: blockdev nvme passthru rw ...passed 00:24:46.002 Test: blockdev nvme passthru vendor specific ...passed 00:24:46.002 Test: blockdev nvme admin passthru ...passed 00:24:46.002 Test: blockdev copy ...passed 00:24:46.002 Suite: bdevio tests on: nvme2n2 00:24:46.002 Test: blockdev write read block ...passed 00:24:46.002 Test: blockdev write zeroes read block ...passed 00:24:46.002 Test: blockdev write zeroes read no split ...passed 00:24:46.261 Test: blockdev write zeroes read split ...passed 00:24:46.261 Test: blockdev write zeroes read split partial ...passed 00:24:46.261 Test: blockdev reset ...passed 00:24:46.261 Test: blockdev write read 8 blocks ...passed 00:24:46.261 Test: blockdev write read size > 128k ...passed 00:24:46.261 Test: blockdev write read invalid size ...passed 00:24:46.261 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:24:46.261 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:24:46.261 Test: blockdev write read max offset ...passed 00:24:46.261 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:24:46.261 Test: blockdev writev readv 8 blocks ...passed 00:24:46.261 Test: blockdev writev readv 30 x 1block ...passed 00:24:46.261 Test: blockdev writev readv block ...passed 00:24:46.261 Test: blockdev writev readv size > 128k ...passed 00:24:46.261 Test: blockdev writev readv size > 128k in two iovs ...passed 00:24:46.261 Test: blockdev comparev and writev ...passed 00:24:46.261 Test: blockdev nvme passthru rw ...passed 00:24:46.261 Test: blockdev nvme passthru vendor specific ...passed 00:24:46.261 Test: blockdev nvme admin passthru ...passed 00:24:46.261 Test: blockdev copy ...passed 00:24:46.261 Suite: bdevio tests on: nvme2n1 00:24:46.261 Test: blockdev write read block ...passed 00:24:46.261 Test: blockdev write zeroes read block ...passed 00:24:46.261 Test: blockdev write zeroes read no split ...passed 00:24:46.261 Test: blockdev write zeroes read split ...passed 00:24:46.261 Test: blockdev write zeroes read split partial ...passed 00:24:46.261 Test: blockdev reset ...passed 00:24:46.261 Test: blockdev write read 8 blocks ...passed 00:24:46.261 Test: blockdev write read size > 128k ...passed 00:24:46.261 Test: blockdev write read invalid size ...passed 00:24:46.261 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:24:46.261 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:24:46.261 Test: blockdev write read max offset ...passed 00:24:46.261 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:24:46.261 Test: blockdev writev readv 8 blocks 
...passed 00:24:46.261 Test: blockdev writev readv 30 x 1block ...passed 00:24:46.261 Test: blockdev writev readv block ...passed 00:24:46.261 Test: blockdev writev readv size > 128k ...passed 00:24:46.261 Test: blockdev writev readv size > 128k in two iovs ...passed 00:24:46.261 Test: blockdev comparev and writev ...passed 00:24:46.261 Test: blockdev nvme passthru rw ...passed 00:24:46.261 Test: blockdev nvme passthru vendor specific ...passed 00:24:46.261 Test: blockdev nvme admin passthru ...passed 00:24:46.261 Test: blockdev copy ...passed 00:24:46.261 Suite: bdevio tests on: nvme1n1 00:24:46.261 Test: blockdev write read block ...passed 00:24:46.261 Test: blockdev write zeroes read block ...passed 00:24:46.261 Test: blockdev write zeroes read no split ...passed 00:24:46.261 Test: blockdev write zeroes read split ...passed 00:24:46.261 Test: blockdev write zeroes read split partial ...passed 00:24:46.261 Test: blockdev reset ...passed 00:24:46.261 Test: blockdev write read 8 blocks ...passed 00:24:46.261 Test: blockdev write read size > 128k ...passed 00:24:46.261 Test: blockdev write read invalid size ...passed 00:24:46.261 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:24:46.261 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:24:46.261 Test: blockdev write read max offset ...passed 00:24:46.261 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:24:46.261 Test: blockdev writev readv 8 blocks ...passed 00:24:46.261 Test: blockdev writev readv 30 x 1block ...passed 00:24:46.261 Test: blockdev writev readv block ...passed 00:24:46.261 Test: blockdev writev readv size > 128k ...passed 00:24:46.261 Test: blockdev writev readv size > 128k in two iovs ...passed 00:24:46.261 Test: blockdev comparev and writev ...passed 00:24:46.261 Test: blockdev nvme passthru rw ...passed 00:24:46.261 Test: blockdev nvme passthru vendor specific ...passed 00:24:46.261 Test: blockdev nvme admin passthru ...passed 00:24:46.261 Test: blockdev copy ...passed 00:24:46.261 Suite: bdevio tests on: nvme0n1 00:24:46.261 Test: blockdev write read block ...passed 00:24:46.261 Test: blockdev write zeroes read block ...passed 00:24:46.261 Test: blockdev write zeroes read no split ...passed 00:24:46.261 Test: blockdev write zeroes read split ...passed 00:24:46.519 Test: blockdev write zeroes read split partial ...passed 00:24:46.519 Test: blockdev reset ...passed 00:24:46.519 Test: blockdev write read 8 blocks ...passed 00:24:46.519 Test: blockdev write read size > 128k ...passed 00:24:46.519 Test: blockdev write read invalid size ...passed 00:24:46.519 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:24:46.519 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:24:46.519 Test: blockdev write read max offset ...passed 00:24:46.519 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:24:46.519 Test: blockdev writev readv 8 blocks ...passed 00:24:46.519 Test: blockdev writev readv 30 x 1block ...passed 00:24:46.519 Test: blockdev writev readv block ...passed 00:24:46.519 Test: blockdev writev readv size > 128k ...passed 00:24:46.519 Test: blockdev writev readv size > 128k in two iovs ...passed 00:24:46.519 Test: blockdev comparev and writev ...passed 00:24:46.519 Test: blockdev nvme passthru rw ...passed 00:24:46.519 Test: blockdev nvme passthru vendor specific ...passed 00:24:46.519 Test: blockdev nvme admin passthru ...passed 00:24:46.519 Test: blockdev copy ...passed 
00:24:46.519 00:24:46.519 Run Summary: Type Total Ran Passed Failed Inactive 00:24:46.519 suites 6 6 n/a 0 0 00:24:46.519 tests 138 138 138 0 0 00:24:46.519 asserts 780 780 780 0 n/a 00:24:46.519 00:24:46.519 Elapsed time = 1.342 seconds 00:24:46.519 0 00:24:46.519 12:26:55 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 76841 00:24:46.519 12:26:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 76841 ']' 00:24:46.519 12:26:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 76841 00:24:46.519 12:26:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:24:46.519 12:26:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:46.519 12:26:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76841 00:24:46.519 12:26:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:46.519 12:26:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:46.519 killing process with pid 76841 00:24:46.519 12:26:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76841' 00:24:46.519 12:26:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@967 -- # kill 76841 00:24:46.519 12:26:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # wait 76841 00:24:47.921 12:26:57 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:24:47.921 00:24:47.921 real 0m3.053s 00:24:47.921 user 0m6.981s 00:24:47.921 sys 0m0.449s 00:24:47.921 12:26:57 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:47.921 12:26:57 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:24:47.921 ************************************ 00:24:47.921 END TEST bdev_bounds 00:24:47.921 ************************************ 00:24:47.921 12:26:57 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:24:47.921 12:26:57 blockdev_xnvme -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:24:47.921 12:26:57 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:24:47.921 12:26:57 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:47.921 12:26:57 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:24:47.921 ************************************ 00:24:47.921 START TEST bdev_nbd 00:24:47.921 ************************************ 00:24:47.921 12:26:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:24:47.921 12:26:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:24:47.921 12:26:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:24:47.921 12:26:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:47.921 12:26:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:24:47.921 12:26:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:24:47.921 12:26:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 
00:24:47.921 12:26:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=6 00:24:47.921 12:26:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:24:47.921 12:26:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:24:47.921 12:26:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:24:47.921 12:26:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=6 00:24:47.921 12:26:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:24:47.921 12:26:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:24:47.921 12:26:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:24:47.921 12:26:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:24:47.921 12:26:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=76910 00:24:47.921 12:26:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:24:47.921 12:26:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:24:47.921 12:26:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 76910 /var/tmp/spdk-nbd.sock 00:24:47.921 12:26:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 76910 ']' 00:24:47.921 12:26:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:24:47.921 12:26:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:47.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:24:47.921 12:26:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:24:47.921 12:26:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:47.921 12:26:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:24:47.921 [2024-07-10 12:26:57.344478] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
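Right after the trap is installed, the harness launches bdev_svc against the spdk-nbd.sock RPC socket and blocks on waitforlisten until that socket answers. A minimal sketch of that wait, assuming the stock rpc.py client (its -s and -t options) and the rpc_get_methods RPC; the real helper in autotest_common.sh is more elaborate, but the idea is just a polling loop:

    # waitforlisten sketch (assumptions as noted above; not the verbatim helper)
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    pid=76910
    for _ in $(seq 1 100); do
        kill -0 "$pid" || exit 1                                 # bdev_svc died, give up
        if "$rpc" -s "$sock" -t 1 rpc_get_methods >/dev/null 2>&1; then
            break                                                # RPC server is up
        fi
        sleep 0.1
    done

The "Waiting for process to start up and listen on UNIX domain socket ..." message above is this step; the SPDK/DPDK initialization banner that follows comes from bdev_svc itself.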
00:24:47.921 [2024-07-10 12:26:57.344620] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:48.181 [2024-07-10 12:26:57.517165] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:48.439 [2024-07-10 12:26:57.766416] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:49.005 12:26:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:49.005 12:26:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:24:49.005 12:26:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:24:49.005 12:26:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:49.005 12:26:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:24:49.005 12:26:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:24:49.005 12:26:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:24:49.005 12:26:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:49.005 12:26:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:24:49.005 12:26:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:24:49.005 12:26:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:24:49.005 12:26:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:24:49.005 12:26:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:24:49.005 12:26:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:24:49.005 12:26:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:24:49.262 12:26:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:24:49.263 12:26:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:24:49.263 12:26:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:24:49.263 12:26:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:24:49.263 12:26:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:24:49.263 12:26:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:24:49.263 12:26:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:24:49.263 12:26:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:24:49.263 12:26:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:24:49.263 12:26:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:24:49.263 12:26:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:24:49.263 12:26:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:49.263 
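The nbd_rpc_start_stop_verify phase that begins here exports each bdev with rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk (the /dev/nbdX argument is optional; when omitted, as above, the RPC returns the device it picked) and then calls waitfornbd on it. Pieced together from the traced commands, waitfornbd amounts to the sketch below; the retry interval is an assumption, and the verbatim helper lives in test/common/autotest_common.sh.

    # waitfornbd sketch, reconstructed from the xtrace above (not the verbatim helper)
    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break     # kernel registered the device?
            sleep 0.1
        done
        for ((i = 1; i <= 20; i++)); do
            if dd if=/dev/$nbd_name of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest \
                  bs=4096 count=1 iflag=direct; then
                size=$(stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest)
                rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
                [ "$size" != 0 ] && return 0                     # one 4 KiB block really came back
            fi
            sleep 0.1
        done
        return 1
    }

The "1+0 records in / 4096 bytes copied" lines that follow are this single-block read-back; the throughput figures (for example 4096 bytes in 0.000818951 s, about 5.0 MB/s) are incidental.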
1+0 records in 00:24:49.263 1+0 records out 00:24:49.263 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000818951 s, 5.0 MB/s 00:24:49.263 12:26:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:49.263 12:26:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:24:49.263 12:26:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:49.263 12:26:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:24:49.263 12:26:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:24:49.263 12:26:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:24:49.263 12:26:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:24:49.263 12:26:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:24:49.263 12:26:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:24:49.263 12:26:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:24:49.263 12:26:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:24:49.263 12:26:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:24:49.263 12:26:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:24:49.263 12:26:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:24:49.263 12:26:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:24:49.263 12:26:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:24:49.263 12:26:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:24:49.521 12:26:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:24:49.521 12:26:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:24:49.521 12:26:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:49.521 1+0 records in 00:24:49.521 1+0 records out 00:24:49.521 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000748052 s, 5.5 MB/s 00:24:49.521 12:26:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:49.521 12:26:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:24:49.521 12:26:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:49.521 12:26:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:24:49.521 12:26:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:24:49.521 12:26:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:24:49.521 12:26:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:24:49.521 12:26:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:24:49.521 12:26:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:24:49.521 12:26:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:24:49.521 12:26:58 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:24:49.521 12:26:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:24:49.521 12:26:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:24:49.521 12:26:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:24:49.521 12:26:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:24:49.521 12:26:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:24:49.521 12:26:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:24:49.521 12:26:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:24:49.521 12:26:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:24:49.521 12:26:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:49.521 1+0 records in 00:24:49.521 1+0 records out 00:24:49.521 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000879813 s, 4.7 MB/s 00:24:49.521 12:26:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:49.521 12:26:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:24:49.521 12:26:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:49.521 12:26:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:24:49.521 12:26:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:24:49.521 12:26:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:24:49.521 12:26:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:24:49.521 12:26:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:24:49.780 12:26:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:24:49.780 12:26:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:24:49.780 12:26:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:24:49.780 12:26:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:24:49.780 12:26:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:24:49.780 12:26:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:24:49.780 12:26:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:24:49.780 12:26:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:24:49.780 12:26:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:24:49.780 12:26:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:24:49.780 12:26:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:24:49.780 12:26:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:49.780 1+0 records in 00:24:49.780 1+0 records out 00:24:49.780 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000636279 s, 6.4 MB/s 00:24:49.780 12:26:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:49.780 12:26:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:24:49.780 12:26:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:49.780 12:26:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:24:49.780 12:26:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:24:49.780 12:26:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:24:49.780 12:26:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:24:49.780 12:26:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:24:50.039 12:26:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:24:50.039 12:26:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:24:50.039 12:26:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:24:50.039 12:26:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:24:50.039 12:26:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:24:50.039 12:26:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:24:50.039 12:26:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:24:50.039 12:26:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:24:50.039 12:26:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:24:50.039 12:26:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:24:50.039 12:26:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:24:50.039 12:26:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:50.039 1+0 records in 00:24:50.039 1+0 records out 00:24:50.039 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000692156 s, 5.9 MB/s 00:24:50.039 12:26:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:50.039 12:26:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:24:50.039 12:26:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:50.297 12:26:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:24:50.297 12:26:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:24:50.297 12:26:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:24:50.297 12:26:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:24:50.297 12:26:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:24:50.297 12:26:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:24:50.297 12:26:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:24:50.297 12:26:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:24:50.297 12:26:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:24:50.297 12:26:59 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:24:50.297 12:26:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:24:50.297 12:26:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:24:50.297 12:26:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:24:50.297 12:26:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:24:50.297 12:26:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:24:50.297 12:26:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:24:50.297 12:26:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:50.297 1+0 records in 00:24:50.297 1+0 records out 00:24:50.297 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000767281 s, 5.3 MB/s 00:24:50.297 12:26:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:50.297 12:26:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:24:50.297 12:26:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:50.297 12:26:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:24:50.297 12:26:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:24:50.297 12:26:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:24:50.297 12:26:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:24:50.297 12:26:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:24:50.556 12:26:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:24:50.556 { 00:24:50.556 "nbd_device": "/dev/nbd0", 00:24:50.556 "bdev_name": "nvme0n1" 00:24:50.556 }, 00:24:50.556 { 00:24:50.556 "nbd_device": "/dev/nbd1", 00:24:50.556 "bdev_name": "nvme1n1" 00:24:50.556 }, 00:24:50.556 { 00:24:50.556 "nbd_device": "/dev/nbd2", 00:24:50.556 "bdev_name": "nvme2n1" 00:24:50.556 }, 00:24:50.556 { 00:24:50.556 "nbd_device": "/dev/nbd3", 00:24:50.556 "bdev_name": "nvme2n2" 00:24:50.556 }, 00:24:50.556 { 00:24:50.556 "nbd_device": "/dev/nbd4", 00:24:50.556 "bdev_name": "nvme2n3" 00:24:50.556 }, 00:24:50.556 { 00:24:50.556 "nbd_device": "/dev/nbd5", 00:24:50.556 "bdev_name": "nvme3n1" 00:24:50.556 } 00:24:50.556 ]' 00:24:50.556 12:26:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:24:50.556 12:26:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:24:50.556 { 00:24:50.556 "nbd_device": "/dev/nbd0", 00:24:50.556 "bdev_name": "nvme0n1" 00:24:50.556 }, 00:24:50.556 { 00:24:50.556 "nbd_device": "/dev/nbd1", 00:24:50.556 "bdev_name": "nvme1n1" 00:24:50.556 }, 00:24:50.556 { 00:24:50.556 "nbd_device": "/dev/nbd2", 00:24:50.556 "bdev_name": "nvme2n1" 00:24:50.556 }, 00:24:50.556 { 00:24:50.556 "nbd_device": "/dev/nbd3", 00:24:50.556 "bdev_name": "nvme2n2" 00:24:50.556 }, 00:24:50.556 { 00:24:50.556 "nbd_device": "/dev/nbd4", 00:24:50.556 "bdev_name": "nvme2n3" 00:24:50.556 }, 00:24:50.556 { 00:24:50.556 "nbd_device": "/dev/nbd5", 00:24:50.556 "bdev_name": "nvme3n1" 00:24:50.556 } 00:24:50.556 ]' 00:24:50.556 12:26:59 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:24:50.556 12:26:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:24:50.556 12:26:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:50.556 12:26:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:24:50.556 12:26:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:50.556 12:26:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:24:50.556 12:26:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:50.556 12:26:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:24:50.814 12:27:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:50.814 12:27:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:50.814 12:27:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:50.814 12:27:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:50.814 12:27:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:50.814 12:27:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:50.814 12:27:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:50.814 12:27:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:50.814 12:27:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:50.814 12:27:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:24:51.073 12:27:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:51.073 12:27:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:51.073 12:27:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:51.073 12:27:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:51.073 12:27:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:51.073 12:27:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:51.073 12:27:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:51.073 12:27:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:51.073 12:27:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:51.073 12:27:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:24:51.073 12:27:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:24:51.073 12:27:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:24:51.073 12:27:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:24:51.073 12:27:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:51.073 12:27:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:51.073 12:27:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:24:51.073 12:27:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:51.073 12:27:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:51.073 12:27:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:51.073 12:27:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:24:51.332 12:27:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:24:51.332 12:27:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:24:51.332 12:27:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:24:51.332 12:27:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:51.332 12:27:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:51.332 12:27:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:24:51.332 12:27:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:51.332 12:27:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:51.332 12:27:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:51.332 12:27:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:24:51.590 12:27:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:24:51.590 12:27:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:24:51.590 12:27:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:24:51.590 12:27:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:51.590 12:27:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:51.590 12:27:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:24:51.590 12:27:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:51.590 12:27:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:51.590 12:27:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:51.590 12:27:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:24:51.848 12:27:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:24:51.848 12:27:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:24:51.848 12:27:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:24:51.848 12:27:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:51.848 12:27:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:51.848 12:27:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:24:51.848 12:27:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:51.848 12:27:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:51.848 12:27:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:24:51.848 12:27:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:51.848 12:27:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:24:52.105 12:27:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:24:52.105 12:27:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:24:52.105 12:27:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:24:52.105 12:27:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:24:52.105 12:27:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:24:52.105 12:27:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:24:52.105 12:27:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:24:52.105 12:27:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:24:52.105 12:27:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:24:52.105 12:27:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:24:52.105 12:27:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:24:52.105 12:27:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:24:52.105 12:27:01 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:24:52.105 12:27:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:52.105 12:27:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:24:52.105 12:27:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:24:52.105 12:27:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:24:52.105 12:27:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:24:52.105 12:27:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:24:52.105 12:27:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:52.105 12:27:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:24:52.105 12:27:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:52.105 12:27:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:24:52.105 12:27:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:52.105 12:27:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:24:52.105 12:27:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:52.105 12:27:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:24:52.105 12:27:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:24:52.362 /dev/nbd0 00:24:52.362 12:27:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:52.362 12:27:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:52.362 12:27:01 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:24:52.362 12:27:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:24:52.362 12:27:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:24:52.362 12:27:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:24:52.362 12:27:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:24:52.362 12:27:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:24:52.362 12:27:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:24:52.362 12:27:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:24:52.362 12:27:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:52.362 1+0 records in 00:24:52.362 1+0 records out 00:24:52.362 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000442284 s, 9.3 MB/s 00:24:52.362 12:27:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:52.362 12:27:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:24:52.362 12:27:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:52.362 12:27:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:24:52.362 12:27:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:24:52.362 12:27:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:52.362 12:27:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:24:52.362 12:27:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:24:52.621 /dev/nbd1 00:24:52.621 12:27:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:52.621 12:27:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:52.621 12:27:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:24:52.621 12:27:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:24:52.621 12:27:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:24:52.621 12:27:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:24:52.622 12:27:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:24:52.622 12:27:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:24:52.622 12:27:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:24:52.622 12:27:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:24:52.622 12:27:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:52.622 1+0 records in 00:24:52.622 1+0 records out 00:24:52.622 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000701977 s, 5.8 MB/s 00:24:52.622 12:27:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:52.622 12:27:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:24:52.622 12:27:01 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:52.622 12:27:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:24:52.622 12:27:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:24:52.622 12:27:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:52.622 12:27:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:24:52.622 12:27:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:24:52.622 /dev/nbd10 00:24:52.879 12:27:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:24:52.879 12:27:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:24:52.879 12:27:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:24:52.879 12:27:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:24:52.879 12:27:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:24:52.879 12:27:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:24:52.879 12:27:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:24:52.879 12:27:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:24:52.879 12:27:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:24:52.879 12:27:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:24:52.879 12:27:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:52.879 1+0 records in 00:24:52.879 1+0 records out 00:24:52.879 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000585481 s, 7.0 MB/s 00:24:52.879 12:27:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:52.879 12:27:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:24:52.879 12:27:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:52.879 12:27:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:24:52.879 12:27:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:24:52.879 12:27:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:52.879 12:27:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:24:52.879 12:27:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:24:52.879 /dev/nbd11 00:24:53.137 12:27:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:24:53.137 12:27:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:24:53.137 12:27:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:24:53.137 12:27:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:24:53.137 12:27:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:24:53.137 12:27:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:24:53.137 12:27:02 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:24:53.137 12:27:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:24:53.137 12:27:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:24:53.137 12:27:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:24:53.137 12:27:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:53.137 1+0 records in 00:24:53.137 1+0 records out 00:24:53.137 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000571677 s, 7.2 MB/s 00:24:53.137 12:27:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:53.137 12:27:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:24:53.137 12:27:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:53.137 12:27:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:24:53.137 12:27:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:24:53.137 12:27:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:53.137 12:27:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:24:53.137 12:27:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:24:53.137 /dev/nbd12 00:24:53.137 12:27:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:24:53.137 12:27:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:24:53.138 12:27:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:24:53.138 12:27:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:24:53.138 12:27:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:24:53.138 12:27:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:24:53.138 12:27:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:24:53.138 12:27:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:24:53.138 12:27:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:24:53.138 12:27:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:24:53.138 12:27:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:53.138 1+0 records in 00:24:53.138 1+0 records out 00:24:53.138 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000654317 s, 6.3 MB/s 00:24:53.138 12:27:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:53.395 12:27:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:24:53.395 12:27:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:53.395 12:27:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:24:53.395 12:27:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:24:53.395 12:27:02 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:53.395 12:27:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:24:53.395 12:27:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:24:53.395 /dev/nbd13 00:24:53.395 12:27:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:24:53.395 12:27:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:24:53.395 12:27:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:24:53.395 12:27:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:24:53.395 12:27:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:24:53.395 12:27:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:24:53.395 12:27:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:24:53.395 12:27:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:24:53.395 12:27:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:24:53.395 12:27:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:24:53.395 12:27:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:53.395 1+0 records in 00:24:53.395 1+0 records out 00:24:53.395 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00383858 s, 1.1 MB/s 00:24:53.395 12:27:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:53.395 12:27:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:24:53.395 12:27:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:53.395 12:27:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:24:53.395 12:27:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:24:53.395 12:27:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:53.395 12:27:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:24:53.395 12:27:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:24:53.395 12:27:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:53.395 12:27:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:24:53.658 12:27:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:24:53.658 { 00:24:53.658 "nbd_device": "/dev/nbd0", 00:24:53.658 "bdev_name": "nvme0n1" 00:24:53.658 }, 00:24:53.658 { 00:24:53.658 "nbd_device": "/dev/nbd1", 00:24:53.658 "bdev_name": "nvme1n1" 00:24:53.658 }, 00:24:53.658 { 00:24:53.658 "nbd_device": "/dev/nbd10", 00:24:53.658 "bdev_name": "nvme2n1" 00:24:53.658 }, 00:24:53.658 { 00:24:53.658 "nbd_device": "/dev/nbd11", 00:24:53.658 "bdev_name": "nvme2n2" 00:24:53.658 }, 00:24:53.658 { 00:24:53.658 "nbd_device": "/dev/nbd12", 00:24:53.658 "bdev_name": "nvme2n3" 00:24:53.658 }, 00:24:53.658 { 00:24:53.658 "nbd_device": "/dev/nbd13", 00:24:53.658 "bdev_name": "nvme3n1" 00:24:53.658 } 00:24:53.658 ]' 00:24:53.658 12:27:03 blockdev_xnvme.bdev_nbd 
-- bdev/nbd_common.sh@64 -- # echo '[ 00:24:53.658 { 00:24:53.658 "nbd_device": "/dev/nbd0", 00:24:53.658 "bdev_name": "nvme0n1" 00:24:53.658 }, 00:24:53.658 { 00:24:53.658 "nbd_device": "/dev/nbd1", 00:24:53.658 "bdev_name": "nvme1n1" 00:24:53.658 }, 00:24:53.658 { 00:24:53.658 "nbd_device": "/dev/nbd10", 00:24:53.658 "bdev_name": "nvme2n1" 00:24:53.658 }, 00:24:53.658 { 00:24:53.658 "nbd_device": "/dev/nbd11", 00:24:53.658 "bdev_name": "nvme2n2" 00:24:53.658 }, 00:24:53.658 { 00:24:53.658 "nbd_device": "/dev/nbd12", 00:24:53.658 "bdev_name": "nvme2n3" 00:24:53.658 }, 00:24:53.658 { 00:24:53.658 "nbd_device": "/dev/nbd13", 00:24:53.658 "bdev_name": "nvme3n1" 00:24:53.658 } 00:24:53.658 ]' 00:24:53.658 12:27:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:24:53.658 12:27:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:24:53.658 /dev/nbd1 00:24:53.658 /dev/nbd10 00:24:53.658 /dev/nbd11 00:24:53.658 /dev/nbd12 00:24:53.658 /dev/nbd13' 00:24:53.658 12:27:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:24:53.658 /dev/nbd1 00:24:53.658 /dev/nbd10 00:24:53.658 /dev/nbd11 00:24:53.658 /dev/nbd12 00:24:53.658 /dev/nbd13' 00:24:53.658 12:27:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:24:53.658 12:27:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:24:53.658 12:27:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:24:53.658 12:27:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:24:53.658 12:27:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:24:53.658 12:27:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:24:53.658 12:27:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:24:53.658 12:27:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:24:53.658 12:27:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:24:53.658 12:27:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:24:53.658 12:27:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:24:53.658 12:27:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:24:53.658 256+0 records in 00:24:53.658 256+0 records out 00:24:53.658 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0102798 s, 102 MB/s 00:24:53.658 12:27:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:24:53.658 12:27:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:24:53.914 256+0 records in 00:24:53.914 256+0 records out 00:24:53.914 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.114583 s, 9.2 MB/s 00:24:53.914 12:27:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:24:53.914 12:27:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:24:53.914 256+0 records in 00:24:53.914 256+0 records out 00:24:53.914 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.140761 s, 
7.4 MB/s 00:24:53.914 12:27:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:24:53.914 12:27:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:24:54.195 256+0 records in 00:24:54.195 256+0 records out 00:24:54.195 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.124769 s, 8.4 MB/s 00:24:54.195 12:27:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:24:54.195 12:27:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:24:54.195 256+0 records in 00:24:54.195 256+0 records out 00:24:54.195 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.114711 s, 9.1 MB/s 00:24:54.195 12:27:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:24:54.195 12:27:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:24:54.456 256+0 records in 00:24:54.456 256+0 records out 00:24:54.456 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.116697 s, 9.0 MB/s 00:24:54.456 12:27:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:24:54.456 12:27:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:24:54.456 256+0 records in 00:24:54.456 256+0 records out 00:24:54.456 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.117864 s, 8.9 MB/s 00:24:54.456 12:27:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:24:54.456 12:27:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:24:54.456 12:27:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:24:54.456 12:27:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:24:54.456 12:27:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:24:54.456 12:27:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:24:54.456 12:27:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:24:54.456 12:27:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:24:54.456 12:27:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:24:54.456 12:27:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:24:54.456 12:27:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:24:54.456 12:27:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:24:54.456 12:27:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:24:54.456 12:27:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:24:54.456 12:27:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:24:54.456 
12:27:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:24:54.714 12:27:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:24:54.714 12:27:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:24:54.714 12:27:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:24:54.714 12:27:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:24:54.714 12:27:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:24:54.714 12:27:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:54.714 12:27:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:24:54.714 12:27:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:54.714 12:27:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:24:54.714 12:27:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:54.714 12:27:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:24:54.714 12:27:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:54.714 12:27:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:54.714 12:27:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:54.714 12:27:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:54.714 12:27:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:54.714 12:27:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:54.714 12:27:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:54.714 12:27:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:54.714 12:27:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:54.714 12:27:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:24:54.972 12:27:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:54.972 12:27:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:54.972 12:27:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:54.972 12:27:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:54.972 12:27:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:54.972 12:27:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:54.973 12:27:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:54.973 12:27:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:54.973 12:27:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:54.973 12:27:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk 
/dev/nbd10 00:24:55.230 12:27:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:24:55.230 12:27:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:24:55.230 12:27:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:24:55.230 12:27:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:55.230 12:27:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:55.230 12:27:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:24:55.230 12:27:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:55.230 12:27:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:55.230 12:27:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:55.230 12:27:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:24:55.488 12:27:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:24:55.488 12:27:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:24:55.488 12:27:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:24:55.488 12:27:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:55.488 12:27:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:55.488 12:27:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:24:55.488 12:27:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:55.488 12:27:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:55.488 12:27:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:55.488 12:27:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:24:55.747 12:27:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:24:55.747 12:27:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:24:55.747 12:27:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:24:55.747 12:27:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:55.747 12:27:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:55.747 12:27:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:24:55.747 12:27:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:55.747 12:27:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:55.747 12:27:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:55.747 12:27:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:24:56.005 12:27:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:24:56.005 12:27:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:24:56.005 12:27:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:24:56.005 12:27:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:56.005 12:27:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:56.005 
12:27:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:24:56.005 12:27:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:56.005 12:27:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:56.005 12:27:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:24:56.005 12:27:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:56.005 12:27:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:24:56.005 12:27:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:24:56.005 12:27:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:24:56.005 12:27:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:24:56.005 12:27:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:24:56.262 12:27:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:24:56.262 12:27:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:24:56.262 12:27:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:24:56.262 12:27:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:24:56.262 12:27:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:24:56.262 12:27:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:24:56.262 12:27:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:24:56.262 12:27:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:24:56.262 12:27:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:24:56.262 12:27:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:56.262 12:27:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:24:56.262 12:27:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:24:56.262 12:27:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:24:56.262 12:27:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:24:56.262 malloc_lvol_verify 00:24:56.262 12:27:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:24:56.520 8fec5c28-0a6e-46b0-b05c-2c648dd9b3b0 00:24:56.520 12:27:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:24:56.777 b9e71c88-beb2-45b7-839a-9b3d815b172f 00:24:56.777 12:27:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:24:56.777 /dev/nbd0 00:24:57.036 12:27:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:24:57.036 mke2fs 1.46.5 (30-Dec-2021) 00:24:57.036 Discarding device blocks: 0/4096 done 00:24:57.036 Creating filesystem with 4096 1k blocks and 1024 inodes 00:24:57.036 
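The stop-and-wait pattern traced above for /dev/nbd0 through /dev/nbd13 boils down to the following sketch, reconstructed from the nbd_common.sh trace: detach each device over the NBD RPC socket, then poll /proc/partitions until the kernel has actually dropped it. The 0.1 s retry interval is an assumption; in this run every device disappears on the first check, so no sleep is visible in the trace.

    rpc=/var/tmp/spdk-nbd.sock
    for dev in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13; do
        # ask the SPDK NBD server to detach the device
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc" nbd_stop_disk "$dev"
        name=$(basename "$dev")
        # poll /proc/partitions (up to 20 tries) until the device entry is gone
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$name" /proc/partitions || break
            sleep 0.1   # retry interval assumed; not visible in this trace
        done
    done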
00:24:57.036 Allocating group tables: 0/1 done 00:24:57.036 Writing inode tables: 0/1 done 00:24:57.036 Creating journal (1024 blocks): done 00:24:57.036 Writing superblocks and filesystem accounting information: 0/1 done 00:24:57.036 00:24:57.036 12:27:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:24:57.036 12:27:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:24:57.036 12:27:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:57.036 12:27:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:57.036 12:27:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:57.036 12:27:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:24:57.036 12:27:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:57.036 12:27:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:24:57.036 12:27:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:57.036 12:27:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:57.036 12:27:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:57.036 12:27:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:57.036 12:27:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:57.036 12:27:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:57.036 12:27:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:57.036 12:27:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:57.036 12:27:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:24:57.036 12:27:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:24:57.036 12:27:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 76910 00:24:57.036 12:27:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 76910 ']' 00:24:57.036 12:27:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 76910 00:24:57.036 12:27:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:24:57.036 12:27:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:57.036 12:27:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76910 00:24:57.294 12:27:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:57.294 12:27:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:57.294 12:27:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76910' 00:24:57.294 killing process with pid 76910 00:24:57.294 12:27:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@967 -- # kill 76910 00:24:57.294 12:27:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # wait 76910 00:24:58.670 ************************************ 00:24:58.670 END TEST bdev_nbd 00:24:58.670 ************************************ 00:24:58.670 12:27:07 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:24:58.670 00:24:58.670 real 0m10.705s 00:24:58.670 user 0m13.660s 00:24:58.670 sys 0m4.367s 00:24:58.670 
12:27:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:58.670 12:27:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:24:58.670 12:27:08 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:24:58.670 12:27:08 blockdev_xnvme -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:24:58.670 12:27:08 blockdev_xnvme -- bdev/blockdev.sh@764 -- # '[' xnvme = nvme ']' 00:24:58.670 12:27:08 blockdev_xnvme -- bdev/blockdev.sh@764 -- # '[' xnvme = gpt ']' 00:24:58.670 12:27:08 blockdev_xnvme -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:24:58.670 12:27:08 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:58.670 12:27:08 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:58.670 12:27:08 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:24:58.670 ************************************ 00:24:58.670 START TEST bdev_fio 00:24:58.670 ************************************ 00:24:58.670 12:27:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1123 -- # fio_test_suite '' 00:24:58.670 12:27:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context 00:24:58.670 12:27:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:24:58.670 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:24:58.670 12:27:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:24:58.670 12:27:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # echo '' 00:24:58.670 12:27:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:24:58.670 12:27:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # env_context= 00:24:58.670 12:27:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:24:58.670 12:27:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:24:58.670 12:27:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:24:58.670 12:27:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:24:58.670 12:27:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:24:58.670 12:27:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:24:58.670 12:27:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:24:58.670 12:27:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:24:58.670 12:27:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:24:58.670 12:27:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:24:58.670 12:27:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:24:58.670 12:27:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:24:58.670 12:27:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:24:58.670 12:27:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:24:58.670 12:27:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:24:58.670 12:27:08 
blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:24:58.670 12:27:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:24:58.670 12:27:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:24:58.670 12:27:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme0n1]' 00:24:58.670 12:27:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme0n1 00:24:58.670 12:27:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:24:58.670 12:27:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme1n1]' 00:24:58.670 12:27:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme1n1 00:24:58.670 12:27:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:24:58.670 12:27:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme2n1]' 00:24:58.670 12:27:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme2n1 00:24:58.670 12:27:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:24:58.670 12:27:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme2n2]' 00:24:58.670 12:27:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme2n2 00:24:58.670 12:27:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:24:58.670 12:27:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme2n3]' 00:24:58.670 12:27:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme2n3 00:24:58.670 12:27:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:24:58.670 12:27:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme3n1]' 00:24:58.670 12:27:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme3n1 00:24:58.670 12:27:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:24:58.670 12:27:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:24:58.670 12:27:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:24:58.670 12:27:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:58.670 12:27:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:24:58.670 ************************************ 00:24:58.670 START TEST bdev_fio_rw_verify 00:24:58.670 ************************************ 00:24:58.670 12:27:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:24:58.670 12:27:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:24:58.670 12:27:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:58.670 12:27:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:58.670 12:27:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:58.670 12:27:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:58.670 12:27:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:24:58.670 12:27:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:58.670 12:27:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:58.670 12:27:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:58.670 12:27:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:24:58.670 12:27:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:58.670 12:27:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:24:58.670 12:27:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:24:58.670 12:27:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:24:58.928 12:27:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:58.928 12:27:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:24:58.928 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:24:58.928 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:24:58.928 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:24:58.928 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:24:58.929 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:24:58.929 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:24:58.929 fio-3.35 00:24:58.929 Starting 6 threads 00:25:11.124 00:25:11.124 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=77322: Wed Jul 10 12:27:19 2024 00:25:11.124 read: IOPS=33.8k, 
BW=132MiB/s (139MB/s)(1321MiB/10001msec) 00:25:11.124 slat (usec): min=2, max=487, avg= 6.49, stdev= 4.00 00:25:11.124 clat (usec): min=109, max=3949, avg=572.98, stdev=181.10 00:25:11.124 lat (usec): min=112, max=3961, avg=579.47, stdev=181.87 00:25:11.124 clat percentiles (usec): 00:25:11.124 | 50.000th=[ 611], 99.000th=[ 1029], 99.900th=[ 1598], 99.990th=[ 3490], 00:25:11.124 | 99.999th=[ 3752] 00:25:11.124 write: IOPS=34.0k, BW=133MiB/s (139MB/s)(1330MiB/10001msec); 0 zone resets 00:25:11.124 slat (usec): min=10, max=1620, avg=20.11, stdev=21.95 00:25:11.124 clat (usec): min=74, max=5072, avg=633.33, stdev=181.78 00:25:11.124 lat (usec): min=87, max=5088, avg=653.44, stdev=184.20 00:25:11.124 clat percentiles (usec): 00:25:11.124 | 50.000th=[ 644], 99.000th=[ 1172], 99.900th=[ 1647], 99.990th=[ 2343], 00:25:11.124 | 99.999th=[ 5014] 00:25:11.124 bw ( KiB/s): min=112605, max=155046, per=100.00%, avg=136808.79, stdev=1989.62, samples=114 00:25:11.124 iops : min=28151, max=38761, avg=34201.84, stdev=497.40, samples=114 00:25:11.124 lat (usec) : 100=0.01%, 250=4.29%, 500=18.44%, 750=64.35%, 1000=11.06% 00:25:11.124 lat (msec) : 2=1.80%, 4=0.05%, 10=0.01% 00:25:11.124 cpu : usr=62.49%, sys=26.47%, ctx=8186, majf=0, minf=27928 00:25:11.124 IO depths : 1=12.1%, 2=24.6%, 4=50.4%, 8=12.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:11.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:11.124 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:11.124 issued rwts: total=338225,340418,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:11.124 latency : target=0, window=0, percentile=100.00%, depth=8 00:25:11.124 00:25:11.124 Run status group 0 (all jobs): 00:25:11.124 READ: bw=132MiB/s (139MB/s), 132MiB/s-132MiB/s (139MB/s-139MB/s), io=1321MiB (1385MB), run=10001-10001msec 00:25:11.124 WRITE: bw=133MiB/s (139MB/s), 133MiB/s-133MiB/s (139MB/s-139MB/s), io=1330MiB (1394MB), run=10001-10001msec 00:25:11.384 ----------------------------------------------------- 00:25:11.384 Suppressions used: 00:25:11.384 count bytes template 00:25:11.384 6 48 /usr/src/fio/parse.c 00:25:11.384 1946 186816 /usr/src/fio/iolog.c 00:25:11.384 1 8 libtcmalloc_minimal.so 00:25:11.384 1 904 libcrypto.so 00:25:11.384 ----------------------------------------------------- 00:25:11.384 00:25:11.384 00:25:11.384 real 0m12.548s 00:25:11.384 user 0m39.563s 00:25:11.384 sys 0m16.271s 00:25:11.384 12:27:20 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:11.384 12:27:20 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:25:11.384 ************************************ 00:25:11.384 END TEST bdev_fio_rw_verify 00:25:11.384 ************************************ 00:25:11.384 12:27:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:25:11.384 12:27:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f 00:25:11.384 12:27:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:25:11.384 12:27:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:25:11.384 12:27:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:25:11.384 12:27:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:25:11.384 12:27:20 blockdev_xnvme.bdev_fio -- 
common/autotest_common.sh@1282 -- # local bdev_type= 00:25:11.384 12:27:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:25:11.384 12:27:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:25:11.384 12:27:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:25:11.384 12:27:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:25:11.384 12:27:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:25:11.384 12:27:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:25:11.384 12:27:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:25:11.384 12:27:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:25:11.384 12:27:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:25:11.384 12:27:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:25:11.384 12:27:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:25:11.384 12:27:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "58fe55b2-0eb5-4e07-b893-ad9552d69434"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "58fe55b2-0eb5-4e07-b893-ad9552d69434",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "41b7c466-15b3-48fc-b3bc-c005b5bd24ee"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "41b7c466-15b3-48fc-b3bc-c005b5bd24ee",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "4c973795-d0cb-4f5e-9860-ac40dbcb2da4"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4c973795-d0cb-4f5e-9860-ac40dbcb2da4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' 
' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "c2601fa2-4b72-43c2-ac7a-86c996d0968a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c2601fa2-4b72-43c2-ac7a-86c996d0968a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "fa72f8f8-9f61-4443-ac7a-c3c19771c15c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "fa72f8f8-9f61-4443-ac7a-c3c19771c15c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "6b464291-9b60-4fe9-b90b-60c3db3f34a2"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "6b464291-9b60-4fe9-b90b-60c3db3f34a2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:25:11.384 12:27:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@355 -- # [[ -n '' ]] 00:25:11.384 12:27:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:25:11.384 /home/vagrant/spdk_repo/spdk 00:25:11.384 12:27:20 
blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # popd 00:25:11.384 12:27:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # trap - SIGINT SIGTERM EXIT 00:25:11.384 12:27:20 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@364 -- # return 0 00:25:11.384 00:25:11.384 real 0m12.760s 00:25:11.384 user 0m39.671s 00:25:11.384 sys 0m16.382s 00:25:11.384 12:27:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:11.384 ************************************ 00:25:11.384 END TEST bdev_fio 00:25:11.384 12:27:20 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:25:11.384 ************************************ 00:25:11.384 12:27:20 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:25:11.384 12:27:20 blockdev_xnvme -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:11.384 12:27:20 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:25:11.384 12:27:20 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:25:11.384 12:27:20 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:11.385 12:27:20 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:25:11.385 ************************************ 00:25:11.385 START TEST bdev_verify 00:25:11.385 ************************************ 00:25:11.385 12:27:20 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:25:11.643 [2024-07-10 12:27:20.943408] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:25:11.644 [2024-07-10 12:27:20.943543] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77492 ] 00:25:11.644 [2024-07-10 12:27:21.115482] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:11.902 [2024-07-10 12:27:21.360344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:11.903 [2024-07-10 12:27:21.360381] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:12.469 Running I/O for 5 seconds... 
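The verify stage that starts here drives every bdev described in bdev.json through the bdevperf example application. A standalone equivalent of the command traced just above, with the flags this run uses (-q 128 outstanding I/Os, 4096-byte I/Os, the verify workload for 5 seconds, core mask 0x3; -C is passed exactly as the harness passes it):

    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/examples/bdevperf" \
        --json "$SPDK/test/bdev/bdev.json" \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3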
00:25:17.759 00:25:17.759 Latency(us) 00:25:17.759 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:17.759 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:17.759 Verification LBA range: start 0x0 length 0xa0000 00:25:17.759 nvme0n1 : 5.06 1720.18 6.72 0.00 0.00 74289.51 14107.35 67378.38 00:25:17.759 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:17.759 Verification LBA range: start 0xa0000 length 0xa0000 00:25:17.759 nvme0n1 : 5.04 1879.84 7.34 0.00 0.00 67976.26 11370.10 57271.62 00:25:17.759 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:17.759 Verification LBA range: start 0x0 length 0xbd0bd 00:25:17.759 nvme1n1 : 5.07 2583.78 10.09 0.00 0.00 49207.88 7001.03 56008.28 00:25:17.759 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:17.759 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:25:17.759 nvme1n1 : 5.05 2921.03 11.41 0.00 0.00 43657.21 6211.44 52428.80 00:25:17.759 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:17.759 Verification LBA range: start 0x0 length 0x80000 00:25:17.759 nvme2n1 : 5.07 1743.12 6.81 0.00 0.00 73109.48 8948.69 64430.57 00:25:17.759 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:17.759 Verification LBA range: start 0x80000 length 0x80000 00:25:17.759 nvme2n1 : 5.07 1919.71 7.50 0.00 0.00 66182.09 10475.23 63167.23 00:25:17.759 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:17.759 Verification LBA range: start 0x0 length 0x80000 00:25:17.760 nvme2n2 : 5.07 1741.32 6.80 0.00 0.00 73033.14 8896.05 73273.99 00:25:17.760 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:17.760 Verification LBA range: start 0x80000 length 0x80000 00:25:17.760 nvme2n2 : 5.07 1891.80 7.39 0.00 0.00 67026.04 4369.07 55166.05 00:25:17.760 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:17.760 Verification LBA range: start 0x0 length 0x80000 00:25:17.760 nvme2n3 : 5.06 1718.75 6.71 0.00 0.00 73886.48 11370.10 73273.99 00:25:17.760 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:17.760 Verification LBA range: start 0x80000 length 0x80000 00:25:17.760 nvme2n3 : 5.08 1890.99 7.39 0.00 0.00 66981.04 5132.34 59377.20 00:25:17.760 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:17.760 Verification LBA range: start 0x0 length 0x20000 00:25:17.760 nvme3n1 : 5.08 1739.69 6.80 0.00 0.00 72884.00 4316.43 68641.72 00:25:17.760 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:17.760 Verification LBA range: start 0x20000 length 0x20000 00:25:17.760 nvme3n1 : 5.08 1890.03 7.38 0.00 0.00 66986.83 7001.03 62746.11 00:25:17.760 =================================================================================================================== 00:25:17.760 Total : 23640.25 92.34 0.00 0.00 64549.90 4316.43 73273.99 00:25:19.151 00:25:19.151 real 0m7.490s 00:25:19.151 user 0m11.352s 00:25:19.151 sys 0m1.966s 00:25:19.151 12:27:28 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:19.151 12:27:28 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:25:19.151 ************************************ 00:25:19.151 END TEST bdev_verify 00:25:19.151 ************************************ 00:25:19.151 12:27:28 blockdev_xnvme -- 
common/autotest_common.sh@1142 -- # return 0 00:25:19.151 12:27:28 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:25:19.151 12:27:28 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:25:19.151 12:27:28 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:19.151 12:27:28 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:25:19.151 ************************************ 00:25:19.151 START TEST bdev_verify_big_io 00:25:19.151 ************************************ 00:25:19.151 12:27:28 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:25:19.151 [2024-07-10 12:27:28.503355] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:25:19.151 [2024-07-10 12:27:28.503484] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77597 ] 00:25:19.410 [2024-07-10 12:27:28.682013] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:19.668 [2024-07-10 12:27:28.919433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:19.668 [2024-07-10 12:27:28.919484] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:20.237 Running I/O for 5 seconds... 00:25:26.841 00:25:26.841 Latency(us) 00:25:26.841 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:26.841 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:25:26.841 Verification LBA range: start 0x0 length 0xa000 00:25:26.841 nvme0n1 : 5.74 153.27 9.58 0.00 0.00 817081.98 96435.30 1536227.01 00:25:26.841 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:25:26.841 Verification LBA range: start 0xa000 length 0xa000 00:25:26.841 nvme0n1 : 5.70 134.63 8.41 0.00 0.00 853245.60 148232.43 1738362.14 00:25:26.841 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:25:26.841 Verification LBA range: start 0x0 length 0xbd0b 00:25:26.841 nvme1n1 : 5.75 186.35 11.65 0.00 0.00 657576.25 15897.09 1367781.06 00:25:26.841 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:25:26.841 Verification LBA range: start 0xbd0b length 0xbd0b 00:25:26.841 nvme1n1 : 5.72 254.51 15.91 0.00 0.00 447743.43 14423.18 656939.18 00:25:26.841 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:25:26.841 Verification LBA range: start 0x0 length 0x8000 00:25:26.841 nvme2n1 : 5.76 122.54 7.66 0.00 0.00 976120.46 12264.97 2385194.56 00:25:26.841 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:25:26.841 Verification LBA range: start 0x8000 length 0x8000 00:25:26.841 nvme2n1 : 5.72 187.30 11.71 0.00 0.00 598444.27 4816.50 1455372.95 00:25:26.841 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:25:26.841 Verification LBA range: start 0x0 length 0x8000 00:25:26.841 nvme2n2 : 5.75 167.04 10.44 0.00 0.00 693625.30 91381.92 1387994.58 00:25:26.841 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 
65536) 00:25:26.841 Verification LBA range: start 0x8000 length 0x8000 00:25:26.841 nvme2n2 : 5.52 202.73 12.67 0.00 0.00 605708.68 91803.04 1286927.01 00:25:26.841 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:25:26.841 Verification LBA range: start 0x0 length 0x8000 00:25:26.841 nvme2n3 : 5.76 156.96 9.81 0.00 0.00 726892.81 9369.81 1765313.49 00:25:26.841 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:25:26.841 Verification LBA range: start 0x8000 length 0x8000 00:25:26.841 nvme2n3 : 5.64 238.49 14.91 0.00 0.00 504802.51 70747.30 542395.94 00:25:26.841 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:25:26.841 Verification LBA range: start 0x0 length 0x2000 00:25:26.841 nvme3n1 : 5.76 155.67 9.73 0.00 0.00 717006.74 4737.54 1441897.28 00:25:26.841 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:25:26.841 Verification LBA range: start 0x2000 length 0x2000 00:25:26.841 nvme3n1 : 5.72 187.54 11.72 0.00 0.00 634065.81 77485.13 1125218.90 00:25:26.841 =================================================================================================================== 00:25:26.841 Total : 2147.03 134.19 0.00 0.00 658271.88 4737.54 2385194.56 00:25:27.778 00:25:27.778 real 0m8.502s 00:25:27.778 user 0m15.052s 00:25:27.778 sys 0m0.650s 00:25:27.778 12:27:36 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:27.778 12:27:36 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:25:27.778 ************************************ 00:25:27.778 END TEST bdev_verify_big_io 00:25:27.778 ************************************ 00:25:27.778 12:27:36 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:25:27.778 12:27:36 blockdev_xnvme -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:25:27.778 12:27:36 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:25:27.778 12:27:36 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:27.778 12:27:36 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:25:27.778 ************************************ 00:25:27.778 START TEST bdev_write_zeroes 00:25:27.778 ************************************ 00:25:27.778 12:27:36 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:25:27.778 [2024-07-10 12:27:37.081954] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:25:27.779 [2024-07-10 12:27:37.082100] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77712 ] 00:25:27.779 [2024-07-10 12:27:37.254152] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:28.038 [2024-07-10 12:27:37.499246] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:28.606 Running I/O for 1 seconds... 
00:25:29.983 00:25:29.983 Latency(us) 00:25:29.983 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:29.983 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:25:29.983 nvme0n1 : 1.00 8034.03 31.38 0.00 0.00 15916.01 7211.59 28425.25 00:25:29.983 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:25:29.983 nvme1n1 : 1.01 11466.66 44.79 0.00 0.00 11128.49 5658.73 20424.07 00:25:29.983 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:25:29.983 nvme2n1 : 1.02 8060.84 31.49 0.00 0.00 15745.59 6658.88 28004.14 00:25:29.983 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:25:29.983 nvme2n2 : 1.02 8049.50 31.44 0.00 0.00 15758.43 7053.67 28425.25 00:25:29.983 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:25:29.983 nvme2n3 : 1.02 8038.19 31.40 0.00 0.00 15768.04 7422.15 28635.81 00:25:29.983 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:25:29.983 nvme3n1 : 1.02 8027.08 31.36 0.00 0.00 15779.99 7422.15 28846.37 00:25:29.983 =================================================================================================================== 00:25:29.983 Total : 51676.30 201.86 0.00 0.00 14760.10 5658.73 28846.37 00:25:30.942 00:25:30.942 real 0m3.352s 00:25:30.942 user 0m2.530s 00:25:30.942 sys 0m0.636s 00:25:30.942 ************************************ 00:25:30.942 END TEST bdev_write_zeroes 00:25:30.942 ************************************ 00:25:30.942 12:27:40 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:30.942 12:27:40 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:25:30.942 12:27:40 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:25:30.943 12:27:40 blockdev_xnvme -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:25:30.943 12:27:40 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:25:30.943 12:27:40 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:30.943 12:27:40 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:25:30.943 ************************************ 00:25:30.943 START TEST bdev_json_nonenclosed 00:25:30.943 ************************************ 00:25:30.943 12:27:40 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:25:31.200 [2024-07-10 12:27:40.511175] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
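Every sub-test in this log is wrapped by the run_test helper from common/autotest_common.sh, which prints the START TEST/END TEST banners and the real/user/sys timings seen above. A hypothetical, much-simplified version of that wrapper, shown only to illustrate the shape of the pattern (the real helper also performs the argument-count checks and xtrace toggling visible in the trace):

    run_test() {
        local name=$1; shift
        echo "START TEST $name"
        time "$@"
        local rc=$?
        echo "END TEST $name"
        return $rc
    }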
00:25:31.200 [2024-07-10 12:27:40.511544] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77771 ] 00:25:31.458 [2024-07-10 12:27:40.686293] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:31.458 [2024-07-10 12:27:40.929485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:31.458 [2024-07-10 12:27:40.929605] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:25:31.458 [2024-07-10 12:27:40.929634] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:25:31.458 [2024-07-10 12:27:40.929651] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:32.025 00:25:32.025 real 0m0.983s 00:25:32.025 user 0m0.711s 00:25:32.025 sys 0m0.165s 00:25:32.025 12:27:41 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:25:32.025 12:27:41 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:32.025 ************************************ 00:25:32.025 12:27:41 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:25:32.025 END TEST bdev_json_nonenclosed 00:25:32.025 ************************************ 00:25:32.025 12:27:41 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 234 00:25:32.025 12:27:41 blockdev_xnvme -- bdev/blockdev.sh@782 -- # true 00:25:32.025 12:27:41 blockdev_xnvme -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:25:32.025 12:27:41 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:25:32.025 12:27:41 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:32.025 12:27:41 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:25:32.025 ************************************ 00:25:32.025 START TEST bdev_json_nonarray 00:25:32.025 ************************************ 00:25:32.026 12:27:41 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:25:32.284 [2024-07-10 12:27:41.537963] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:25:32.284 [2024-07-10 12:27:41.538092] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77802 ] 00:25:32.284 [2024-07-10 12:27:41.709988] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:32.543 [2024-07-10 12:27:41.956237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:32.543 [2024-07-10 12:27:41.956350] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
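bdev_json_nonenclosed and bdev_json_nonarray are negative tests: bdevperf is pointed at a deliberately malformed --json configuration, and the harness records the resulting non-zero exit (the es=234 captured above) as the expected outcome. The exact contents of nonenclosed.json and nonarray.json are not shown in this log; the pattern itself looks roughly like:

    SPDK=/home/vagrant/spdk_repo/spdk
    if "$SPDK/build/examples/bdevperf" \
           --json "$SPDK/test/bdev/nonarray.json" \
           -q 128 -o 4096 -w write_zeroes -t 1; then
        echo "ERROR: malformed JSON config was accepted" >&2
        exit 1
    fi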
00:25:32.543 [2024-07-10 12:27:41.956371] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:25:32.543 [2024-07-10 12:27:41.956387] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:33.109 00:25:33.109 real 0m0.964s 00:25:33.109 user 0m0.706s 00:25:33.109 sys 0m0.152s 00:25:33.109 12:27:42 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:25:33.109 12:27:42 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:33.109 ************************************ 00:25:33.109 END TEST bdev_json_nonarray 00:25:33.109 ************************************ 00:25:33.109 12:27:42 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:25:33.109 12:27:42 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 234 00:25:33.109 12:27:42 blockdev_xnvme -- bdev/blockdev.sh@785 -- # true 00:25:33.109 12:27:42 blockdev_xnvme -- bdev/blockdev.sh@787 -- # [[ xnvme == bdev ]] 00:25:33.109 12:27:42 blockdev_xnvme -- bdev/blockdev.sh@794 -- # [[ xnvme == gpt ]] 00:25:33.109 12:27:42 blockdev_xnvme -- bdev/blockdev.sh@798 -- # [[ xnvme == crypto_sw ]] 00:25:33.109 12:27:42 blockdev_xnvme -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:25:33.109 12:27:42 blockdev_xnvme -- bdev/blockdev.sh@811 -- # cleanup 00:25:33.109 12:27:42 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:25:33.109 12:27:42 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:25:33.109 12:27:42 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:25:33.109 12:27:42 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:25:33.109 12:27:42 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:25:33.109 12:27:42 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:25:33.109 12:27:42 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:34.043 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:40.616 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:25:40.616 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:25:40.616 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:25:40.616 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:25:40.616 00:25:40.616 real 1m8.893s 00:25:40.616 user 1m44.662s 00:25:40.616 sys 0m39.670s 00:25:40.616 12:27:49 blockdev_xnvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:40.616 ************************************ 00:25:40.616 END TEST blockdev_xnvme 00:25:40.616 ************************************ 00:25:40.616 12:27:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:25:40.616 12:27:49 -- common/autotest_common.sh@1142 -- # return 0 00:25:40.616 12:27:49 -- spdk/autotest.sh@251 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:25:40.616 12:27:49 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:40.616 12:27:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:40.616 12:27:49 -- common/autotest_common.sh@10 -- # set +x 00:25:40.616 ************************************ 00:25:40.616 START TEST ublk 00:25:40.616 ************************************ 00:25:40.616 12:27:49 ublk -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:25:40.616 * Looking for test storage... 
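The "nvme -> uio_pci_generic" messages a few lines up come from scripts/setup.sh rebinding the emulated NVMe controllers to a userspace-friendly driver before the next test suite starts. Done by hand for a single controller, the same rebind amounts to roughly the following; this is illustrative only (the BDF is taken from the log, and the real script also handles hugepages, permissions, and every controller at once):

    bdf=0000:00:10.0
    modprobe uio_pci_generic
    # detach the kernel nvme driver, then force the next probe to pick uio_pci_generic
    echo "$bdf" > "/sys/bus/pci/devices/$bdf/driver/unbind"
    echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
    echo "$bdf" > /sys/bus/pci/drivers_probe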
00:25:40.616 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:25:40.616 12:27:50 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:25:40.616 12:27:50 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:25:40.616 12:27:50 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:25:40.874 12:27:50 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:25:40.874 12:27:50 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:25:40.874 12:27:50 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:25:40.874 12:27:50 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:25:40.874 12:27:50 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:25:40.874 12:27:50 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:25:40.874 12:27:50 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:25:40.875 12:27:50 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:25:40.875 12:27:50 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:25:40.875 12:27:50 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:25:40.875 12:27:50 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:25:40.875 12:27:50 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:25:40.875 12:27:50 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:25:40.875 12:27:50 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:25:40.875 12:27:50 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:25:40.875 12:27:50 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:25:40.875 12:27:50 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:25:40.875 12:27:50 ublk -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:40.875 12:27:50 ublk -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:40.875 12:27:50 ublk -- common/autotest_common.sh@10 -- # set +x 00:25:40.875 ************************************ 00:25:40.875 START TEST test_save_ublk_config 00:25:40.875 ************************************ 00:25:40.875 12:27:50 ublk.test_save_ublk_config -- common/autotest_common.sh@1123 -- # test_save_config 00:25:40.875 12:27:50 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:25:40.875 12:27:50 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=78094 00:25:40.875 12:27:50 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:25:40.875 12:27:50 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:25:40.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:40.875 12:27:50 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 78094 00:25:40.875 12:27:50 ublk.test_save_ublk_config -- common/autotest_common.sh@829 -- # '[' -z 78094 ']' 00:25:40.875 12:27:50 ublk.test_save_ublk_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:40.875 12:27:50 ublk.test_save_ublk_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:40.875 12:27:50 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:40.875 12:27:50 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:40.875 12:27:50 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:25:40.875 [2024-07-10 12:27:50.244371] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
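test_save_ublk_config boils down to: start spdk_tgt with ublk logging (as traced above), create the ublk target, expose a malloc bdev through a ublk disk, and capture the running configuration with save_config; the JSON dump that follows is that captured configuration. A condensed sketch of the RPC sequence, with the malloc size chosen to match the dump (8192 blocks of 4096 bytes, i.e. a 32 MiB bdev) and the cpumask, queue count, and depth left at the values visible in the saved JSON:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # talks to /var/tmp/spdk.sock by default
    "$rpc" ublk_create_target                         # "cpumask": "1" in the dump below
    "$rpc" bdev_malloc_create -b malloc0 32 4096      # 32 MiB => 8192 x 4096-byte blocks
    "$rpc" ublk_start_disk malloc0 0                  # ublk id 0; 1 queue of depth 128 per the dump
    "$rpc" save_config > ublk_config.json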
00:25:40.875 [2024-07-10 12:27:50.244502] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78094 ] 00:25:41.133 [2024-07-10 12:27:50.416637] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:41.392 [2024-07-10 12:27:50.716166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:42.326 12:27:51 ublk.test_save_ublk_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:42.326 12:27:51 ublk.test_save_ublk_config -- common/autotest_common.sh@862 -- # return 0 00:25:42.326 12:27:51 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:25:42.326 12:27:51 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:25:42.326 12:27:51 ublk.test_save_ublk_config -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.326 12:27:51 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:25:42.326 [2024-07-10 12:27:51.759752] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:25:42.326 [2024-07-10 12:27:51.761181] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:25:42.584 malloc0 00:25:42.584 [2024-07-10 12:27:51.856880] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:25:42.584 [2024-07-10 12:27:51.856981] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:25:42.584 [2024-07-10 12:27:51.856994] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:25:42.584 [2024-07-10 12:27:51.857006] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:25:42.584 [2024-07-10 12:27:51.864780] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:25:42.584 [2024-07-10 12:27:51.864810] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:25:42.584 [2024-07-10 12:27:51.872758] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:25:42.584 [2024-07-10 12:27:51.872877] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:25:42.584 [2024-07-10 12:27:51.896777] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:25:42.584 0 00:25:42.585 12:27:51 ublk.test_save_ublk_config -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.585 12:27:51 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:25:42.585 12:27:51 ublk.test_save_ublk_config -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.585 12:27:51 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:25:42.843 12:27:52 ublk.test_save_ublk_config -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.843 12:27:52 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:25:42.843 "subsystems": [ 00:25:42.843 { 00:25:42.843 "subsystem": "keyring", 00:25:42.843 "config": [] 00:25:42.843 }, 00:25:42.843 { 00:25:42.843 "subsystem": "iobuf", 00:25:42.843 "config": [ 00:25:42.843 { 00:25:42.843 "method": "iobuf_set_options", 00:25:42.843 "params": { 00:25:42.843 "small_pool_count": 8192, 00:25:42.843 "large_pool_count": 1024, 00:25:42.843 "small_bufsize": 8192, 00:25:42.843 "large_bufsize": 135168 00:25:42.843 } 00:25:42.843 } 00:25:42.843 ] 00:25:42.843 }, 00:25:42.843 { 
00:25:42.843 "subsystem": "sock", 00:25:42.843 "config": [ 00:25:42.843 { 00:25:42.843 "method": "sock_set_default_impl", 00:25:42.843 "params": { 00:25:42.843 "impl_name": "posix" 00:25:42.843 } 00:25:42.843 }, 00:25:42.843 { 00:25:42.843 "method": "sock_impl_set_options", 00:25:42.843 "params": { 00:25:42.843 "impl_name": "ssl", 00:25:42.843 "recv_buf_size": 4096, 00:25:42.843 "send_buf_size": 4096, 00:25:42.843 "enable_recv_pipe": true, 00:25:42.843 "enable_quickack": false, 00:25:42.843 "enable_placement_id": 0, 00:25:42.843 "enable_zerocopy_send_server": true, 00:25:42.843 "enable_zerocopy_send_client": false, 00:25:42.843 "zerocopy_threshold": 0, 00:25:42.843 "tls_version": 0, 00:25:42.843 "enable_ktls": false 00:25:42.843 } 00:25:42.843 }, 00:25:42.843 { 00:25:42.843 "method": "sock_impl_set_options", 00:25:42.843 "params": { 00:25:42.843 "impl_name": "posix", 00:25:42.843 "recv_buf_size": 2097152, 00:25:42.843 "send_buf_size": 2097152, 00:25:42.843 "enable_recv_pipe": true, 00:25:42.843 "enable_quickack": false, 00:25:42.843 "enable_placement_id": 0, 00:25:42.843 "enable_zerocopy_send_server": true, 00:25:42.843 "enable_zerocopy_send_client": false, 00:25:42.843 "zerocopy_threshold": 0, 00:25:42.843 "tls_version": 0, 00:25:42.843 "enable_ktls": false 00:25:42.843 } 00:25:42.843 } 00:25:42.843 ] 00:25:42.843 }, 00:25:42.843 { 00:25:42.843 "subsystem": "vmd", 00:25:42.843 "config": [] 00:25:42.843 }, 00:25:42.843 { 00:25:42.843 "subsystem": "accel", 00:25:42.843 "config": [ 00:25:42.843 { 00:25:42.844 "method": "accel_set_options", 00:25:42.844 "params": { 00:25:42.844 "small_cache_size": 128, 00:25:42.844 "large_cache_size": 16, 00:25:42.844 "task_count": 2048, 00:25:42.844 "sequence_count": 2048, 00:25:42.844 "buf_count": 2048 00:25:42.844 } 00:25:42.844 } 00:25:42.844 ] 00:25:42.844 }, 00:25:42.844 { 00:25:42.844 "subsystem": "bdev", 00:25:42.844 "config": [ 00:25:42.844 { 00:25:42.844 "method": "bdev_set_options", 00:25:42.844 "params": { 00:25:42.844 "bdev_io_pool_size": 65535, 00:25:42.844 "bdev_io_cache_size": 256, 00:25:42.844 "bdev_auto_examine": true, 00:25:42.844 "iobuf_small_cache_size": 128, 00:25:42.844 "iobuf_large_cache_size": 16 00:25:42.844 } 00:25:42.844 }, 00:25:42.844 { 00:25:42.844 "method": "bdev_raid_set_options", 00:25:42.844 "params": { 00:25:42.844 "process_window_size_kb": 1024 00:25:42.844 } 00:25:42.844 }, 00:25:42.844 { 00:25:42.844 "method": "bdev_iscsi_set_options", 00:25:42.844 "params": { 00:25:42.844 "timeout_sec": 30 00:25:42.844 } 00:25:42.844 }, 00:25:42.844 { 00:25:42.844 "method": "bdev_nvme_set_options", 00:25:42.844 "params": { 00:25:42.844 "action_on_timeout": "none", 00:25:42.844 "timeout_us": 0, 00:25:42.844 "timeout_admin_us": 0, 00:25:42.844 "keep_alive_timeout_ms": 10000, 00:25:42.844 "arbitration_burst": 0, 00:25:42.844 "low_priority_weight": 0, 00:25:42.844 "medium_priority_weight": 0, 00:25:42.844 "high_priority_weight": 0, 00:25:42.844 "nvme_adminq_poll_period_us": 10000, 00:25:42.844 "nvme_ioq_poll_period_us": 0, 00:25:42.844 "io_queue_requests": 0, 00:25:42.844 "delay_cmd_submit": true, 00:25:42.844 "transport_retry_count": 4, 00:25:42.844 "bdev_retry_count": 3, 00:25:42.844 "transport_ack_timeout": 0, 00:25:42.844 "ctrlr_loss_timeout_sec": 0, 00:25:42.844 "reconnect_delay_sec": 0, 00:25:42.844 "fast_io_fail_timeout_sec": 0, 00:25:42.844 "disable_auto_failback": false, 00:25:42.844 "generate_uuids": false, 00:25:42.844 "transport_tos": 0, 00:25:42.844 "nvme_error_stat": false, 00:25:42.844 "rdma_srq_size": 0, 00:25:42.844 
"io_path_stat": false, 00:25:42.844 "allow_accel_sequence": false, 00:25:42.844 "rdma_max_cq_size": 0, 00:25:42.844 "rdma_cm_event_timeout_ms": 0, 00:25:42.844 "dhchap_digests": [ 00:25:42.844 "sha256", 00:25:42.844 "sha384", 00:25:42.844 "sha512" 00:25:42.844 ], 00:25:42.844 "dhchap_dhgroups": [ 00:25:42.844 "null", 00:25:42.844 "ffdhe2048", 00:25:42.844 "ffdhe3072", 00:25:42.844 "ffdhe4096", 00:25:42.844 "ffdhe6144", 00:25:42.844 "ffdhe8192" 00:25:42.844 ] 00:25:42.844 } 00:25:42.844 }, 00:25:42.844 { 00:25:42.844 "method": "bdev_nvme_set_hotplug", 00:25:42.844 "params": { 00:25:42.844 "period_us": 100000, 00:25:42.844 "enable": false 00:25:42.844 } 00:25:42.844 }, 00:25:42.844 { 00:25:42.844 "method": "bdev_malloc_create", 00:25:42.844 "params": { 00:25:42.844 "name": "malloc0", 00:25:42.844 "num_blocks": 8192, 00:25:42.844 "block_size": 4096, 00:25:42.844 "physical_block_size": 4096, 00:25:42.844 "uuid": "ed8b64a0-c1c3-4e1a-95c0-9ebfe7d8ef76", 00:25:42.844 "optimal_io_boundary": 0 00:25:42.844 } 00:25:42.844 }, 00:25:42.844 { 00:25:42.844 "method": "bdev_wait_for_examine" 00:25:42.844 } 00:25:42.844 ] 00:25:42.844 }, 00:25:42.844 { 00:25:42.844 "subsystem": "scsi", 00:25:42.844 "config": null 00:25:42.844 }, 00:25:42.844 { 00:25:42.844 "subsystem": "scheduler", 00:25:42.844 "config": [ 00:25:42.844 { 00:25:42.844 "method": "framework_set_scheduler", 00:25:42.844 "params": { 00:25:42.844 "name": "static" 00:25:42.844 } 00:25:42.844 } 00:25:42.844 ] 00:25:42.844 }, 00:25:42.844 { 00:25:42.844 "subsystem": "vhost_scsi", 00:25:42.844 "config": [] 00:25:42.844 }, 00:25:42.844 { 00:25:42.844 "subsystem": "vhost_blk", 00:25:42.844 "config": [] 00:25:42.844 }, 00:25:42.844 { 00:25:42.844 "subsystem": "ublk", 00:25:42.844 "config": [ 00:25:42.844 { 00:25:42.844 "method": "ublk_create_target", 00:25:42.844 "params": { 00:25:42.844 "cpumask": "1" 00:25:42.844 } 00:25:42.844 }, 00:25:42.844 { 00:25:42.844 "method": "ublk_start_disk", 00:25:42.844 "params": { 00:25:42.844 "bdev_name": "malloc0", 00:25:42.844 "ublk_id": 0, 00:25:42.844 "num_queues": 1, 00:25:42.844 "queue_depth": 128 00:25:42.844 } 00:25:42.844 } 00:25:42.844 ] 00:25:42.844 }, 00:25:42.844 { 00:25:42.844 "subsystem": "nbd", 00:25:42.844 "config": [] 00:25:42.844 }, 00:25:42.844 { 00:25:42.844 "subsystem": "nvmf", 00:25:42.844 "config": [ 00:25:42.844 { 00:25:42.844 "method": "nvmf_set_config", 00:25:42.844 "params": { 00:25:42.844 "discovery_filter": "match_any", 00:25:42.844 "admin_cmd_passthru": { 00:25:42.844 "identify_ctrlr": false 00:25:42.844 } 00:25:42.844 } 00:25:42.844 }, 00:25:42.844 { 00:25:42.844 "method": "nvmf_set_max_subsystems", 00:25:42.844 "params": { 00:25:42.844 "max_subsystems": 1024 00:25:42.844 } 00:25:42.844 }, 00:25:42.844 { 00:25:42.844 "method": "nvmf_set_crdt", 00:25:42.844 "params": { 00:25:42.844 "crdt1": 0, 00:25:42.844 "crdt2": 0, 00:25:42.844 "crdt3": 0 00:25:42.844 } 00:25:42.844 } 00:25:42.844 ] 00:25:42.844 }, 00:25:42.844 { 00:25:42.844 "subsystem": "iscsi", 00:25:42.844 "config": [ 00:25:42.844 { 00:25:42.844 "method": "iscsi_set_options", 00:25:42.844 "params": { 00:25:42.844 "node_base": "iqn.2016-06.io.spdk", 00:25:42.844 "max_sessions": 128, 00:25:42.844 "max_connections_per_session": 2, 00:25:42.844 "max_queue_depth": 64, 00:25:42.844 "default_time2wait": 2, 00:25:42.844 "default_time2retain": 20, 00:25:42.844 "first_burst_length": 8192, 00:25:42.844 "immediate_data": true, 00:25:42.844 "allow_duplicated_isid": false, 00:25:42.844 "error_recovery_level": 0, 00:25:42.844 "nop_timeout": 60, 
00:25:42.844 "nop_in_interval": 30, 00:25:42.844 "disable_chap": false, 00:25:42.844 "require_chap": false, 00:25:42.844 "mutual_chap": false, 00:25:42.844 "chap_group": 0, 00:25:42.844 "max_large_datain_per_connection": 64, 00:25:42.844 "max_r2t_per_connection": 4, 00:25:42.844 "pdu_pool_size": 36864, 00:25:42.844 "immediate_data_pool_size": 16384, 00:25:42.844 "data_out_pool_size": 2048 00:25:42.844 } 00:25:42.844 } 00:25:42.844 ] 00:25:42.844 } 00:25:42.844 ] 00:25:42.844 }' 00:25:42.844 12:27:52 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 78094 00:25:42.844 12:27:52 ublk.test_save_ublk_config -- common/autotest_common.sh@948 -- # '[' -z 78094 ']' 00:25:42.844 12:27:52 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # kill -0 78094 00:25:42.844 12:27:52 ublk.test_save_ublk_config -- common/autotest_common.sh@953 -- # uname 00:25:42.844 12:27:52 ublk.test_save_ublk_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:42.844 12:27:52 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78094 00:25:42.844 killing process with pid 78094 00:25:42.844 12:27:52 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:42.844 12:27:52 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:42.844 12:27:52 ublk.test_save_ublk_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78094' 00:25:42.844 12:27:52 ublk.test_save_ublk_config -- common/autotest_common.sh@967 -- # kill 78094 00:25:42.844 12:27:52 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # wait 78094 00:25:44.742 [2024-07-10 12:27:53.782297] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:25:44.742 [2024-07-10 12:27:53.821771] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:25:44.742 [2024-07-10 12:27:53.821934] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:25:44.742 [2024-07-10 12:27:53.829861] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:25:44.742 [2024-07-10 12:27:53.829918] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:25:44.742 [2024-07-10 12:27:53.829927] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:25:44.742 [2024-07-10 12:27:53.829956] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:25:44.742 [2024-07-10 12:27:53.830121] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:25:46.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:46.137 12:27:55 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=78164 00:25:46.137 12:27:55 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 78164 00:25:46.137 12:27:55 ublk.test_save_ublk_config -- common/autotest_common.sh@829 -- # '[' -z 78164 ']' 00:25:46.137 12:27:55 ublk.test_save_ublk_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:46.137 12:27:55 ublk.test_save_ublk_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:46.137 12:27:55 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
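At this point the first target (pid 78094) has dumped its configuration and been killed, and ublk.sh@118 is about to start a second target (pid 78164) with the same JSON fed back through -c /dev/fd/63. A minimal sketch of that save/restore round trip, using a temporary file in place of the test's process substitution (the file name and the $tgtpid variable here are illustrative, not the test's own code):
  ./scripts/rpc.py save_config > /tmp/ublk_config.json       # capture the JSON shown above
  kill "$tgtpid"; wait "$tgtpid" 2>/dev/null                 # killprocess 78094, as logged above
  ./build/bin/spdk_tgt -L ublk -c /tmp/ublk_config.json &    # restores the malloc bdev, ublk target and /dev/ublkb0
  ./scripts/rpc.py ublk_get_disks | jq -r '.[0].ublk_device' # ublk.sh@122 expects /dev/ublkb0
  test -b /dev/ublkb0                                        # ublk.sh@123: the block device must reappear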
00:25:46.137 12:27:55 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:46.137 12:27:55 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:25:46.137 12:27:55 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:25:46.137 12:27:55 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:25:46.137 "subsystems": [ 00:25:46.137 { 00:25:46.137 "subsystem": "keyring", 00:25:46.137 "config": [] 00:25:46.137 }, 00:25:46.137 { 00:25:46.137 "subsystem": "iobuf", 00:25:46.137 "config": [ 00:25:46.137 { 00:25:46.137 "method": "iobuf_set_options", 00:25:46.137 "params": { 00:25:46.137 "small_pool_count": 8192, 00:25:46.137 "large_pool_count": 1024, 00:25:46.137 "small_bufsize": 8192, 00:25:46.137 "large_bufsize": 135168 00:25:46.137 } 00:25:46.137 } 00:25:46.137 ] 00:25:46.137 }, 00:25:46.137 { 00:25:46.137 "subsystem": "sock", 00:25:46.137 "config": [ 00:25:46.137 { 00:25:46.137 "method": "sock_set_default_impl", 00:25:46.137 "params": { 00:25:46.137 "impl_name": "posix" 00:25:46.137 } 00:25:46.137 }, 00:25:46.137 { 00:25:46.137 "method": "sock_impl_set_options", 00:25:46.137 "params": { 00:25:46.137 "impl_name": "ssl", 00:25:46.137 "recv_buf_size": 4096, 00:25:46.137 "send_buf_size": 4096, 00:25:46.137 "enable_recv_pipe": true, 00:25:46.137 "enable_quickack": false, 00:25:46.137 "enable_placement_id": 0, 00:25:46.137 "enable_zerocopy_send_server": true, 00:25:46.137 "enable_zerocopy_send_client": false, 00:25:46.137 "zerocopy_threshold": 0, 00:25:46.137 "tls_version": 0, 00:25:46.137 "enable_ktls": false 00:25:46.137 } 00:25:46.137 }, 00:25:46.137 { 00:25:46.137 "method": "sock_impl_set_options", 00:25:46.137 "params": { 00:25:46.137 "impl_name": "posix", 00:25:46.138 "recv_buf_size": 2097152, 00:25:46.138 "send_buf_size": 2097152, 00:25:46.138 "enable_recv_pipe": true, 00:25:46.138 "enable_quickack": false, 00:25:46.138 "enable_placement_id": 0, 00:25:46.138 "enable_zerocopy_send_server": true, 00:25:46.138 "enable_zerocopy_send_client": false, 00:25:46.138 "zerocopy_threshold": 0, 00:25:46.138 "tls_version": 0, 00:25:46.138 "enable_ktls": false 00:25:46.138 } 00:25:46.138 } 00:25:46.138 ] 00:25:46.138 }, 00:25:46.138 { 00:25:46.138 "subsystem": "vmd", 00:25:46.138 "config": [] 00:25:46.138 }, 00:25:46.138 { 00:25:46.138 "subsystem": "accel", 00:25:46.138 "config": [ 00:25:46.138 { 00:25:46.138 "method": "accel_set_options", 00:25:46.138 "params": { 00:25:46.138 "small_cache_size": 128, 00:25:46.138 "large_cache_size": 16, 00:25:46.138 "task_count": 2048, 00:25:46.138 "sequence_count": 2048, 00:25:46.138 "buf_count": 2048 00:25:46.138 } 00:25:46.138 } 00:25:46.138 ] 00:25:46.138 }, 00:25:46.138 { 00:25:46.138 "subsystem": "bdev", 00:25:46.138 "config": [ 00:25:46.138 { 00:25:46.138 "method": "bdev_set_options", 00:25:46.138 "params": { 00:25:46.138 "bdev_io_pool_size": 65535, 00:25:46.138 "bdev_io_cache_size": 256, 00:25:46.138 "bdev_auto_examine": true, 00:25:46.138 "iobuf_small_cache_size": 128, 00:25:46.138 "iobuf_large_cache_size": 16 00:25:46.138 } 00:25:46.138 }, 00:25:46.138 { 00:25:46.138 "method": "bdev_raid_set_options", 00:25:46.138 "params": { 00:25:46.138 "process_window_size_kb": 1024 00:25:46.138 } 00:25:46.138 }, 00:25:46.138 { 00:25:46.138 "method": "bdev_iscsi_set_options", 00:25:46.138 "params": { 00:25:46.138 "timeout_sec": 30 00:25:46.138 } 00:25:46.138 }, 00:25:46.138 { 00:25:46.138 "method": "bdev_nvme_set_options", 00:25:46.138 "params": { 
00:25:46.138 "action_on_timeout": "none", 00:25:46.138 "timeout_us": 0, 00:25:46.138 "timeout_admin_us": 0, 00:25:46.138 "keep_alive_timeout_ms": 10000, 00:25:46.138 "arbitration_burst": 0, 00:25:46.138 "low_priority_weight": 0, 00:25:46.138 "medium_priority_weight": 0, 00:25:46.138 "high_priority_weight": 0, 00:25:46.138 "nvme_adminq_poll_period_us": 10000, 00:25:46.138 "nvme_ioq_poll_period_us": 0, 00:25:46.138 "io_queue_requests": 0, 00:25:46.138 "delay_cmd_submit": true, 00:25:46.138 "transport_retry_count": 4, 00:25:46.138 "bdev_retry_count": 3, 00:25:46.138 "transport_ack_timeout": 0, 00:25:46.138 "ctrlr_loss_timeout_sec": 0, 00:25:46.138 "reconnect_delay_sec": 0, 00:25:46.138 "fast_io_fail_timeout_sec": 0, 00:25:46.138 "disable_auto_failback": false, 00:25:46.138 "generate_uuids": false, 00:25:46.138 "transport_tos": 0, 00:25:46.138 "nvme_error_stat": false, 00:25:46.138 "rdma_srq_size": 0, 00:25:46.138 "io_path_stat": false, 00:25:46.138 "allow_accel_sequence": false, 00:25:46.138 "rdma_max_cq_size": 0, 00:25:46.138 "rdma_cm_event_timeout_ms": 0, 00:25:46.138 "dhchap_digests": [ 00:25:46.138 "sha256", 00:25:46.138 "sha384", 00:25:46.138 "sha512" 00:25:46.138 ], 00:25:46.138 "dhchap_dhgroups": [ 00:25:46.138 "null", 00:25:46.138 "ffdhe2048", 00:25:46.138 "ffdhe3072", 00:25:46.138 "ffdhe4096", 00:25:46.138 "ffdhe6144", 00:25:46.138 "ffdhe8192" 00:25:46.138 ] 00:25:46.138 } 00:25:46.138 }, 00:25:46.138 { 00:25:46.138 "method": "bdev_nvme_set_hotplug", 00:25:46.138 "params": { 00:25:46.138 "period_us": 100000, 00:25:46.138 "enable": false 00:25:46.138 } 00:25:46.138 }, 00:25:46.138 { 00:25:46.138 "method": "bdev_malloc_create", 00:25:46.138 "params": { 00:25:46.138 "name": "malloc0", 00:25:46.138 "num_blocks": 8192, 00:25:46.138 "block_size": 4096, 00:25:46.138 "physical_block_size": 4096, 00:25:46.138 "uuid": "ed8b64a0-c1c3-4e1a-95c0-9ebfe7d8ef76", 00:25:46.138 "optimal_io_boundary": 0 00:25:46.138 } 00:25:46.138 }, 00:25:46.138 { 00:25:46.138 "method": "bdev_wait_for_examine" 00:25:46.138 } 00:25:46.138 ] 00:25:46.138 }, 00:25:46.138 { 00:25:46.138 "subsystem": "scsi", 00:25:46.138 "config": null 00:25:46.138 }, 00:25:46.138 { 00:25:46.138 "subsystem": "scheduler", 00:25:46.138 "config": [ 00:25:46.138 { 00:25:46.138 "method": "framework_set_scheduler", 00:25:46.138 "params": { 00:25:46.138 "name": "static" 00:25:46.138 } 00:25:46.138 } 00:25:46.138 ] 00:25:46.138 }, 00:25:46.138 { 00:25:46.138 "subsystem": "vhost_scsi", 00:25:46.138 "config": [] 00:25:46.138 }, 00:25:46.138 { 00:25:46.138 "subsystem": "vhost_blk", 00:25:46.138 "config": [] 00:25:46.138 }, 00:25:46.138 { 00:25:46.138 "subsystem": "ublk", 00:25:46.138 "config": [ 00:25:46.138 { 00:25:46.138 "method": "ublk_create_target", 00:25:46.138 "params": { 00:25:46.138 "cpumask": "1" 00:25:46.138 } 00:25:46.138 }, 00:25:46.138 { 00:25:46.138 "method": "ublk_start_disk", 00:25:46.138 "params": { 00:25:46.138 "bdev_name": "malloc0", 00:25:46.138 "ublk_id": 0, 00:25:46.138 "num_queues": 1, 00:25:46.138 "queue_depth": 128 00:25:46.138 } 00:25:46.138 } 00:25:46.138 ] 00:25:46.138 }, 00:25:46.138 { 00:25:46.138 "subsystem": "nbd", 00:25:46.138 "config": [] 00:25:46.138 }, 00:25:46.138 { 00:25:46.138 "subsystem": "nvmf", 00:25:46.138 "config": [ 00:25:46.138 { 00:25:46.138 "method": "nvmf_set_config", 00:25:46.138 "params": { 00:25:46.138 "discovery_filter": "match_any", 00:25:46.138 "admin_cmd_passthru": { 00:25:46.138 "identify_ctrlr": false 00:25:46.138 } 00:25:46.138 } 00:25:46.138 }, 00:25:46.138 { 00:25:46.138 "method": 
"nvmf_set_max_subsystems", 00:25:46.138 "params": { 00:25:46.138 "max_subsystems": 1024 00:25:46.138 } 00:25:46.138 }, 00:25:46.138 { 00:25:46.138 "method": "nvmf_set_crdt", 00:25:46.138 "params": { 00:25:46.138 "crdt1": 0, 00:25:46.138 "crdt2": 0, 00:25:46.138 "crdt3": 0 00:25:46.138 } 00:25:46.138 } 00:25:46.138 ] 00:25:46.138 }, 00:25:46.138 { 00:25:46.138 "subsystem": "iscsi", 00:25:46.138 "config": [ 00:25:46.138 { 00:25:46.138 "method": "iscsi_set_options", 00:25:46.138 "params": { 00:25:46.138 "node_base": "iqn.2016-06.io.spdk", 00:25:46.138 "max_sessions": 128, 00:25:46.138 "max_connections_per_session": 2, 00:25:46.138 "max_queue_depth": 64, 00:25:46.138 "default_time2wait": 2, 00:25:46.138 "default_time2retain": 20, 00:25:46.138 "first_burst_length": 8192, 00:25:46.138 "immediate_data": true, 00:25:46.138 "allow_duplicated_isid": false, 00:25:46.138 "error_recovery_level": 0, 00:25:46.138 "nop_timeout": 60, 00:25:46.138 "nop_in_interval": 30, 00:25:46.138 "disable_chap": false, 00:25:46.138 "require_chap": false, 00:25:46.138 "mutual_chap": false, 00:25:46.138 "chap_group": 0, 00:25:46.138 "max_large_datain_per_connection": 64, 00:25:46.138 "max_r2t_per_connection": 4, 00:25:46.138 "pdu_pool_size": 36864, 00:25:46.138 "immediate_data_pool_size": 16384, 00:25:46.138 "data_out_pool_size": 2048 00:25:46.138 } 00:25:46.138 } 00:25:46.139 ] 00:25:46.139 } 00:25:46.139 ] 00:25:46.139 }' 00:25:46.139 [2024-07-10 12:27:55.564661] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:25:46.139 [2024-07-10 12:27:55.564823] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78164 ] 00:25:46.397 [2024-07-10 12:27:55.734510] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:46.655 [2024-07-10 12:27:56.013383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:48.028 [2024-07-10 12:27:57.188766] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:25:48.028 [2024-07-10 12:27:57.190087] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:25:48.028 [2024-07-10 12:27:57.196866] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:25:48.028 [2024-07-10 12:27:57.196997] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:25:48.028 [2024-07-10 12:27:57.197010] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:25:48.028 [2024-07-10 12:27:57.197019] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:25:48.028 [2024-07-10 12:27:57.204917] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:25:48.028 [2024-07-10 12:27:57.204939] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:25:48.028 [2024-07-10 12:27:57.212767] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:25:48.028 [2024-07-10 12:27:57.212874] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:25:48.028 [2024-07-10 12:27:57.228767] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:25:48.028 12:27:57 ublk.test_save_ublk_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:48.028 12:27:57 ublk.test_save_ublk_config -- 
common/autotest_common.sh@862 -- # return 0 00:25:48.028 12:27:57 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:25:48.028 12:27:57 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:25:48.028 12:27:57 ublk.test_save_ublk_config -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.028 12:27:57 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:25:48.028 12:27:57 ublk.test_save_ublk_config -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.028 12:27:57 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:25:48.028 12:27:57 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:25:48.028 12:27:57 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 78164 00:25:48.028 12:27:57 ublk.test_save_ublk_config -- common/autotest_common.sh@948 -- # '[' -z 78164 ']' 00:25:48.028 12:27:57 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # kill -0 78164 00:25:48.028 12:27:57 ublk.test_save_ublk_config -- common/autotest_common.sh@953 -- # uname 00:25:48.028 12:27:57 ublk.test_save_ublk_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:48.028 12:27:57 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78164 00:25:48.028 killing process with pid 78164 00:25:48.028 12:27:57 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:48.028 12:27:57 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:48.028 12:27:57 ublk.test_save_ublk_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78164' 00:25:48.028 12:27:57 ublk.test_save_ublk_config -- common/autotest_common.sh@967 -- # kill 78164 00:25:48.028 12:27:57 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # wait 78164 00:25:49.928 [2024-07-10 12:27:59.016698] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:25:49.928 [2024-07-10 12:27:59.055851] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:25:49.928 [2024-07-10 12:27:59.056018] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:25:49.928 [2024-07-10 12:27:59.065762] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:25:49.928 [2024-07-10 12:27:59.065815] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:25:49.928 [2024-07-10 12:27:59.065824] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:25:49.928 [2024-07-10 12:27:59.065852] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:25:49.928 [2024-07-10 12:27:59.066024] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:25:51.308 12:28:00 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:25:51.308 00:25:51.308 real 0m10.543s 00:25:51.308 user 0m8.722s 00:25:51.308 sys 0m2.445s 00:25:51.308 ************************************ 00:25:51.308 END TEST test_save_ublk_config 00:25:51.308 ************************************ 00:25:51.308 12:28:00 ublk.test_save_ublk_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:51.308 12:28:00 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:25:51.308 12:28:00 ublk -- common/autotest_common.sh@1142 -- # return 0 00:25:51.308 12:28:00 ublk -- ublk/ublk.sh@139 -- # spdk_pid=78251 00:25:51.308 12:28:00 ublk -- ublk/ublk.sh@138 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:25:51.308 12:28:00 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:51.308 12:28:00 ublk -- ublk/ublk.sh@141 -- # waitforlisten 78251 00:25:51.308 12:28:00 ublk -- common/autotest_common.sh@829 -- # '[' -z 78251 ']' 00:25:51.308 12:28:00 ublk -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:51.308 12:28:00 ublk -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:51.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:51.308 12:28:00 ublk -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:51.308 12:28:00 ublk -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:51.308 12:28:00 ublk -- common/autotest_common.sh@10 -- # set +x 00:25:51.565 [2024-07-10 12:28:00.843536] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:25:51.565 [2024-07-10 12:28:00.843664] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78251 ] 00:25:51.565 [2024-07-10 12:28:01.014471] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:51.824 [2024-07-10 12:28:01.294018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:51.824 [2024-07-10 12:28:01.294018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:53.199 12:28:02 ublk -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:53.199 12:28:02 ublk -- common/autotest_common.sh@862 -- # return 0 00:25:53.199 12:28:02 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:25:53.199 12:28:02 ublk -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:53.199 12:28:02 ublk -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:53.199 12:28:02 ublk -- common/autotest_common.sh@10 -- # set +x 00:25:53.199 ************************************ 00:25:53.199 START TEST test_create_ublk 00:25:53.199 ************************************ 00:25:53.199 12:28:02 ublk.test_create_ublk -- common/autotest_common.sh@1123 -- # test_create_ublk 00:25:53.199 12:28:02 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:25:53.199 12:28:02 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.199 12:28:02 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:25:53.199 [2024-07-10 12:28:02.332751] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:25:53.199 [2024-07-10 12:28:02.336331] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:25:53.199 12:28:02 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.199 12:28:02 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:25:53.199 12:28:02 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:25:53.199 12:28:02 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.199 12:28:02 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:25:53.199 12:28:02 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.199 12:28:02 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:25:53.199 12:28:02 
ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:25:53.199 12:28:02 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.199 12:28:02 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:25:53.199 [2024-07-10 12:28:02.673899] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:25:53.199 [2024-07-10 12:28:02.674334] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:25:53.199 [2024-07-10 12:28:02.674355] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:25:53.199 [2024-07-10 12:28:02.674367] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:25:53.457 [2024-07-10 12:28:02.681778] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:25:53.457 [2024-07-10 12:28:02.681807] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:25:53.457 [2024-07-10 12:28:02.689766] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:25:53.457 [2024-07-10 12:28:02.698944] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:25:53.457 [2024-07-10 12:28:02.725757] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:25:53.457 12:28:02 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.457 12:28:02 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:25:53.457 12:28:02 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:25:53.457 12:28:02 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:25:53.457 12:28:02 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.457 12:28:02 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:25:53.457 12:28:02 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.457 12:28:02 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:25:53.457 { 00:25:53.457 "ublk_device": "/dev/ublkb0", 00:25:53.457 "id": 0, 00:25:53.457 "queue_depth": 512, 00:25:53.457 "num_queues": 4, 00:25:53.457 "bdev_name": "Malloc0" 00:25:53.457 } 00:25:53.457 ]' 00:25:53.457 12:28:02 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:25:53.457 12:28:02 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:25:53.457 12:28:02 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:25:53.457 12:28:02 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:25:53.457 12:28:02 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:25:53.457 12:28:02 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:25:53.457 12:28:02 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:25:53.717 12:28:02 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:25:53.717 12:28:02 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:25:53.717 12:28:02 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:25:53.717 12:28:02 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:25:53.717 12:28:02 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:25:53.717 12:28:02 ublk.test_create_ublk -- lvol/common.sh@41 -- # local 
offset=0 00:25:53.717 12:28:02 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:25:53.717 12:28:02 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:25:53.717 12:28:02 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:25:53.717 12:28:02 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:25:53.717 12:28:02 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:25:53.717 12:28:02 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:25:53.717 12:28:02 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:25:53.717 12:28:02 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:25:53.717 12:28:02 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:25:53.717 fio: verification read phase will never start because write phase uses all of runtime 00:25:53.717 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:25:53.717 fio-3.35 00:25:53.717 Starting 1 process 00:26:05.929 00:26:05.929 fio_test: (groupid=0, jobs=1): err= 0: pid=78310: Wed Jul 10 12:28:13 2024 00:26:05.929 write: IOPS=16.5k, BW=64.4MiB/s (67.5MB/s)(644MiB/10001msec); 0 zone resets 00:26:05.929 clat (usec): min=37, max=3951, avg=59.87, stdev=97.47 00:26:05.929 lat (usec): min=38, max=3951, avg=60.32, stdev=97.47 00:26:05.929 clat percentiles (usec): 00:26:05.929 | 1.00th=[ 40], 5.00th=[ 52], 10.00th=[ 53], 20.00th=[ 54], 00:26:05.929 | 30.00th=[ 55], 40.00th=[ 56], 50.00th=[ 56], 60.00th=[ 57], 00:26:05.929 | 70.00th=[ 58], 80.00th=[ 59], 90.00th=[ 61], 95.00th=[ 63], 00:26:05.929 | 99.00th=[ 71], 99.50th=[ 80], 99.90th=[ 2008], 99.95th=[ 2802], 00:26:05.929 | 99.99th=[ 3490] 00:26:05.929 bw ( KiB/s): min=63824, max=74800, per=100.00%, avg=65991.16, stdev=2194.78, samples=19 00:26:05.929 iops : min=15956, max=18700, avg=16497.79, stdev=548.70, samples=19 00:26:05.929 lat (usec) : 50=3.23%, 100=96.41%, 250=0.15%, 500=0.02%, 750=0.02% 00:26:05.929 lat (usec) : 1000=0.01% 00:26:05.929 lat (msec) : 2=0.06%, 4=0.10% 00:26:05.929 cpu : usr=2.90%, sys=9.70%, ctx=164822, majf=0, minf=796 00:26:05.929 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:05.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.929 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.929 issued rwts: total=0,164829,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.929 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:05.929 00:26:05.929 Run status group 0 (all jobs): 00:26:05.929 WRITE: bw=64.4MiB/s (67.5MB/s), 64.4MiB/s-64.4MiB/s (67.5MB/s-67.5MB/s), io=644MiB (675MB), run=10001-10001msec 00:26:05.929 00:26:05.929 Disk stats (read/write): 00:26:05.929 ublkb0: ios=0/163126, merge=0/0, ticks=0/8693, in_queue=8694, util=99.04% 00:26:05.929 12:28:13 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:26:05.929 12:28:13 ublk.test_create_ublk -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.929 12:28:13 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:26:05.929 [2024-07-10 12:28:13.251932] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:26:05.929 [2024-07-10 12:28:13.289808] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:26:05.929 [2024-07-10 12:28:13.290987] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:26:05.929 [2024-07-10 12:28:13.297772] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:26:05.929 [2024-07-10 12:28:13.298046] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:26:05.929 [2024-07-10 12:28:13.298060] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:26:05.929 12:28:13 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.929 12:28:13 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:26:05.929 12:28:13 ublk.test_create_ublk -- common/autotest_common.sh@648 -- # local es=0 00:26:05.929 12:28:13 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:26:05.929 12:28:13 ublk.test_create_ublk -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:05.929 12:28:13 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:05.929 12:28:13 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:05.929 12:28:13 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:05.929 12:28:13 ublk.test_create_ublk -- common/autotest_common.sh@651 -- # rpc_cmd ublk_stop_disk 0 00:26:05.929 12:28:13 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.929 12:28:13 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:26:05.929 [2024-07-10 12:28:13.320866] ublk.c:1071:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:26:05.929 request: 00:26:05.929 { 00:26:05.929 "ublk_id": 0, 00:26:05.929 "method": "ublk_stop_disk", 00:26:05.929 "req_id": 1 00:26:05.929 } 00:26:05.929 Got JSON-RPC error response 00:26:05.929 response: 00:26:05.929 { 00:26:05.929 "code": -19, 00:26:05.929 "message": "No such device" 00:26:05.929 } 00:26:05.929 12:28:13 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:05.929 12:28:13 ublk.test_create_ublk -- common/autotest_common.sh@651 -- # es=1 00:26:05.929 12:28:13 ublk.test_create_ublk -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:05.929 12:28:13 ublk.test_create_ublk -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:05.929 12:28:13 ublk.test_create_ublk -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:05.929 12:28:13 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:26:05.929 12:28:13 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.929 12:28:13 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:26:05.929 [2024-07-10 12:28:13.336847] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:26:05.929 [2024-07-10 12:28:13.344770] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:26:05.930 [2024-07-10 12:28:13.344807] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:26:05.930 12:28:13 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.930 12:28:13 
ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:26:05.930 12:28:13 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.930 12:28:13 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:26:05.930 12:28:13 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.930 12:28:13 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:26:05.930 12:28:13 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:26:05.930 12:28:13 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.930 12:28:13 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:26:05.930 12:28:13 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.930 12:28:13 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:26:05.930 12:28:13 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:26:05.930 12:28:13 ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:26:05.930 12:28:13 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:26:05.930 12:28:13 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.930 12:28:13 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:26:05.930 12:28:13 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.930 12:28:13 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:26:05.930 12:28:13 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:26:05.930 ************************************ 00:26:05.930 END TEST test_create_ublk 00:26:05.930 ************************************ 00:26:05.930 12:28:13 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:26:05.930 00:26:05.930 real 0m11.511s 00:26:05.930 user 0m0.679s 00:26:05.930 sys 0m1.105s 00:26:05.930 12:28:13 ublk.test_create_ublk -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:05.930 12:28:13 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:26:05.930 12:28:13 ublk -- common/autotest_common.sh@1142 -- # return 0 00:26:05.930 12:28:13 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:26:05.930 12:28:13 ublk -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:05.930 12:28:13 ublk -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:05.930 12:28:13 ublk -- common/autotest_common.sh@10 -- # set +x 00:26:05.930 ************************************ 00:26:05.930 START TEST test_create_multi_ublk 00:26:05.930 ************************************ 00:26:05.930 12:28:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@1123 -- # test_create_multi_ublk 00:26:05.930 12:28:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:26:05.930 12:28:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.930 12:28:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:26:05.930 [2024-07-10 12:28:13.917759] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:26:05.930 [2024-07-10 12:28:13.920820] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:26:05.930 12:28:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.930 12:28:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:26:05.930 12:28:13 
ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:26:05.930 12:28:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:26:05.930 12:28:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:26:05.930 12:28:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.930 12:28:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:26:05.930 12:28:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.930 12:28:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:26:05.930 12:28:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:26:05.930 12:28:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.930 12:28:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:26:05.930 [2024-07-10 12:28:14.253911] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:26:05.930 [2024-07-10 12:28:14.254432] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:26:05.930 [2024-07-10 12:28:14.254452] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:26:05.930 [2024-07-10 12:28:14.254461] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:26:05.930 [2024-07-10 12:28:14.261791] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:26:05.930 [2024-07-10 12:28:14.261811] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:26:05.930 [2024-07-10 12:28:14.269762] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:26:05.930 [2024-07-10 12:28:14.270382] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:26:05.930 [2024-07-10 12:28:14.279829] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:26:05.930 12:28:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.930 12:28:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:26:05.930 12:28:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:26:05.930 12:28:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:26:05.930 12:28:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.930 12:28:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:26:05.930 12:28:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.930 12:28:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:26:05.930 12:28:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:26:05.930 12:28:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.930 12:28:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:26:05.930 [2024-07-10 12:28:14.621906] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:26:05.930 [2024-07-10 12:28:14.622340] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:26:05.930 [2024-07-10 12:28:14.622358] ublk.c: 
955:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:26:05.930 [2024-07-10 12:28:14.622369] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:26:05.930 [2024-07-10 12:28:14.630216] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:26:05.930 [2024-07-10 12:28:14.630241] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:26:05.930 [2024-07-10 12:28:14.637775] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:26:05.930 [2024-07-10 12:28:14.638422] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:26:05.930 [2024-07-10 12:28:14.646806] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:26:05.930 12:28:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.930 12:28:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:26:05.930 12:28:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:26:05.930 12:28:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:26:05.930 12:28:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.930 12:28:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:26:05.930 12:28:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.930 12:28:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:26:05.930 12:28:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:26:05.930 12:28:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.930 12:28:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:26:05.930 [2024-07-10 12:28:14.995919] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:26:05.930 [2024-07-10 12:28:14.996353] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:26:05.930 [2024-07-10 12:28:14.996378] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:26:05.930 [2024-07-10 12:28:14.996387] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:26:05.930 [2024-07-10 12:28:15.003789] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:26:05.930 [2024-07-10 12:28:15.003811] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:26:05.930 [2024-07-10 12:28:15.011766] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:26:05.930 [2024-07-10 12:28:15.012420] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:26:05.930 [2024-07-10 12:28:15.018082] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:26:05.930 12:28:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.930 12:28:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:26:05.930 12:28:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:26:05.930 12:28:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:26:05.930 12:28:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:26:05.930 12:28:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:26:05.930 12:28:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.930 12:28:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:26:05.930 12:28:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:26:05.930 12:28:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.930 12:28:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:26:05.930 [2024-07-10 12:28:15.348895] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:26:05.930 [2024-07-10 12:28:15.349330] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:26:05.930 [2024-07-10 12:28:15.349347] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:26:05.930 [2024-07-10 12:28:15.349358] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:26:05.930 [2024-07-10 12:28:15.360777] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:26:05.930 [2024-07-10 12:28:15.360806] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:26:05.930 [2024-07-10 12:28:15.372772] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:26:05.930 [2024-07-10 12:28:15.373442] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:26:05.930 [2024-07-10 12:28:15.393763] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:26:05.930 12:28:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.930 12:28:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:26:05.930 12:28:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:26:05.930 12:28:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.930 12:28:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:26:06.188 12:28:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.188 12:28:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:26:06.188 { 00:26:06.188 "ublk_device": "/dev/ublkb0", 00:26:06.188 "id": 0, 00:26:06.188 "queue_depth": 512, 00:26:06.188 "num_queues": 4, 00:26:06.188 "bdev_name": "Malloc0" 00:26:06.188 }, 00:26:06.188 { 00:26:06.188 "ublk_device": "/dev/ublkb1", 00:26:06.188 "id": 1, 00:26:06.188 "queue_depth": 512, 00:26:06.188 "num_queues": 4, 00:26:06.188 "bdev_name": "Malloc1" 00:26:06.188 }, 00:26:06.188 { 00:26:06.188 "ublk_device": "/dev/ublkb2", 00:26:06.188 "id": 2, 00:26:06.188 "queue_depth": 512, 00:26:06.188 "num_queues": 4, 00:26:06.188 "bdev_name": "Malloc2" 00:26:06.188 }, 00:26:06.188 { 00:26:06.188 "ublk_device": "/dev/ublkb3", 00:26:06.188 "id": 3, 00:26:06.188 "queue_depth": 512, 00:26:06.188 "num_queues": 4, 00:26:06.188 "bdev_name": "Malloc3" 00:26:06.188 } 00:26:06.188 ]' 00:26:06.188 12:28:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:26:06.188 12:28:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:26:06.188 12:28:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:26:06.188 12:28:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- 
# [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:26:06.188 12:28:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:26:06.188 12:28:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:26:06.188 12:28:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:26:06.188 12:28:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:26:06.188 12:28:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:26:06.188 12:28:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:26:06.188 12:28:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:26:06.188 12:28:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:26:06.188 12:28:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:26:06.188 12:28:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:26:06.446 12:28:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:26:06.446 12:28:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:26:06.446 12:28:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:26:06.446 12:28:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:26:06.446 12:28:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:26:06.446 12:28:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:26:06.446 12:28:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:26:06.446 12:28:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:26:06.446 12:28:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:26:06.446 12:28:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:26:06.446 12:28:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:26:06.446 12:28:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:26:06.446 12:28:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:26:06.446 12:28:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:26:06.446 12:28:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:26:06.704 12:28:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:26:06.704 12:28:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:26:06.704 12:28:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:26:06.704 12:28:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:26:06.704 12:28:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:26:06.704 12:28:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:26:06.704 12:28:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:26:06.704 12:28:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:26:06.704 12:28:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:26:06.704 12:28:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:26:06.704 12:28:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 
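By this point test_create_multi_ublk has started ublkb0 through ublkb3 on Malloc0..Malloc3 and is walking the ublk_get_disks output with jq; the matching stop and destroy RPCs follow below. Condensed into loop form, and only as a sketch drawn from the commands visible in this log (the seq bounds come from MAX_DEV_ID=3 at ublk.sh@29):
  for i in $(seq 0 3); do
      ./scripts/rpc.py bdev_malloc_create -b Malloc$i 128 4096       # one 128 MiB malloc bdev per disk
      ./scripts/rpc.py ublk_start_disk Malloc$i $i -q 4 -d 512       # /dev/ublkb$i with 4 queues, depth 512
  done
  ./scripts/rpc.py ublk_get_disks | jq -r '.[].ublk_device'          # /dev/ublkb0 .. /dev/ublkb3
  for i in $(seq 0 3); do ./scripts/rpc.py ublk_stop_disk $i; done   # per-disk teardown, as logged below
  ./scripts/rpc.py -t 120 ublk_destroy_target                        # same call ublk.sh@91 issues below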
00:26:06.704 12:28:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:26:06.704 12:28:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:26:06.962 12:28:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:26:06.962 12:28:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:26:06.962 12:28:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:26:06.962 12:28:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:26:06.962 12:28:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:26:06.962 12:28:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:26:06.962 12:28:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:26:06.962 12:28:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.962 12:28:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:26:06.962 [2024-07-10 12:28:16.272931] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:26:06.962 [2024-07-10 12:28:16.316251] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:26:06.962 [2024-07-10 12:28:16.321158] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:26:06.962 [2024-07-10 12:28:16.331941] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:26:06.962 [2024-07-10 12:28:16.332272] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:26:06.962 [2024-07-10 12:28:16.332291] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:26:06.962 12:28:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.962 12:28:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:26:06.962 12:28:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:26:06.962 12:28:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.962 12:28:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:26:06.962 [2024-07-10 12:28:16.339859] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:26:06.962 [2024-07-10 12:28:16.372255] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:26:06.962 [2024-07-10 12:28:16.377173] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:26:06.962 [2024-07-10 12:28:16.384875] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:26:06.962 [2024-07-10 12:28:16.385161] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:26:06.962 [2024-07-10 12:28:16.385177] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:26:06.962 12:28:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.962 12:28:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:26:06.962 12:28:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:26:06.962 12:28:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.962 12:28:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:26:06.962 [2024-07-10 12:28:16.402868] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: 
ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:26:06.962 [2024-07-10 12:28:16.436324] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:26:06.962 [2024-07-10 12:28:16.437678] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:26:07.219 [2024-07-10 12:28:16.441754] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:26:07.219 [2024-07-10 12:28:16.442044] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:26:07.219 [2024-07-10 12:28:16.442062] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:26:07.219 12:28:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.219 12:28:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:26:07.219 12:28:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:26:07.219 12:28:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.219 12:28:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:26:07.219 [2024-07-10 12:28:16.448870] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:26:07.219 [2024-07-10 12:28:16.489300] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:26:07.219 [2024-07-10 12:28:16.490228] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:26:07.219 [2024-07-10 12:28:16.495764] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:26:07.219 [2024-07-10 12:28:16.496073] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:26:07.219 [2024-07-10 12:28:16.496089] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:26:07.219 12:28:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.219 12:28:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:26:07.219 [2024-07-10 12:28:16.672867] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:26:07.219 [2024-07-10 12:28:16.679859] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:26:07.219 [2024-07-10 12:28:16.679910] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:26:07.475 12:28:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:26:07.475 12:28:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:26:07.475 12:28:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:26:07.475 12:28:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.475 12:28:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:26:07.731 12:28:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.731 12:28:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:26:07.731 12:28:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:26:07.731 12:28:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.731 12:28:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:26:07.991 12:28:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.992 12:28:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for 
i in $(seq 0 $MAX_DEV_ID) 00:26:07.992 12:28:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:26:07.992 12:28:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.992 12:28:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:26:08.559 12:28:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.559 12:28:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:26:08.559 12:28:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:26:08.559 12:28:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.559 12:28:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:26:08.816 12:28:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.816 12:28:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:26:08.816 12:28:18 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:26:08.816 12:28:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.816 12:28:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:26:08.816 12:28:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.816 12:28:18 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:26:08.816 12:28:18 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:26:08.816 12:28:18 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:26:08.816 12:28:18 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:26:08.816 12:28:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.816 12:28:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:26:08.816 12:28:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.816 12:28:18 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:26:08.816 12:28:18 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:26:09.074 ************************************ 00:26:09.074 END TEST test_create_multi_ublk 00:26:09.074 ************************************ 00:26:09.074 12:28:18 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:26:09.074 00:26:09.074 real 0m4.410s 00:26:09.074 user 0m0.989s 00:26:09.074 sys 0m0.232s 00:26:09.074 12:28:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:09.074 12:28:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:26:09.074 12:28:18 ublk -- common/autotest_common.sh@1142 -- # return 0 00:26:09.074 12:28:18 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:26:09.074 12:28:18 ublk -- ublk/ublk.sh@147 -- # cleanup 00:26:09.074 12:28:18 ublk -- ublk/ublk.sh@130 -- # killprocess 78251 00:26:09.074 12:28:18 ublk -- common/autotest_common.sh@948 -- # '[' -z 78251 ']' 00:26:09.074 12:28:18 ublk -- common/autotest_common.sh@952 -- # kill -0 78251 00:26:09.074 12:28:18 ublk -- common/autotest_common.sh@953 -- # uname 00:26:09.074 12:28:18 ublk -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:09.074 12:28:18 ublk -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78251 00:26:09.074 killing process 
with pid 78251 00:26:09.074 12:28:18 ublk -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:09.074 12:28:18 ublk -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:09.074 12:28:18 ublk -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78251' 00:26:09.074 12:28:18 ublk -- common/autotest_common.sh@967 -- # kill 78251 00:26:09.074 12:28:18 ublk -- common/autotest_common.sh@972 -- # wait 78251 00:26:10.447 [2024-07-10 12:28:19.689345] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:26:10.447 [2024-07-10 12:28:19.689414] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:26:11.819 ************************************ 00:26:11.819 END TEST ublk 00:26:11.819 ************************************ 00:26:11.819 00:26:11.819 real 0m31.189s 00:26:11.819 user 0m45.492s 00:26:11.819 sys 0m8.933s 00:26:11.819 12:28:21 ublk -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:11.819 12:28:21 ublk -- common/autotest_common.sh@10 -- # set +x 00:26:11.819 12:28:21 -- common/autotest_common.sh@1142 -- # return 0 00:26:11.819 12:28:21 -- spdk/autotest.sh@252 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:26:11.819 12:28:21 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:11.819 12:28:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:11.819 12:28:21 -- common/autotest_common.sh@10 -- # set +x 00:26:11.819 ************************************ 00:26:11.819 START TEST ublk_recovery 00:26:11.819 ************************************ 00:26:11.819 12:28:21 ublk_recovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:26:12.076 * Looking for test storage... 00:26:12.076 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:26:12.076 12:28:21 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:26:12.076 12:28:21 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:26:12.076 12:28:21 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:26:12.076 12:28:21 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:26:12.076 12:28:21 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:26:12.076 12:28:21 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:26:12.076 12:28:21 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:26:12.076 12:28:21 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:26:12.076 12:28:21 ublk_recovery -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:26:12.076 12:28:21 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:26:12.076 12:28:21 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=78654 00:26:12.076 12:28:21 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:26:12.076 12:28:21 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:12.076 12:28:21 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 78654 00:26:12.076 12:28:21 ublk_recovery -- common/autotest_common.sh@829 -- # '[' -z 78654 ']' 00:26:12.076 12:28:21 ublk_recovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:12.076 12:28:21 ublk_recovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:12.076 12:28:21 ublk_recovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:26:12.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:12.076 12:28:21 ublk_recovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:12.077 12:28:21 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:26:12.077 [2024-07-10 12:28:21.473770] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:26:12.077 [2024-07-10 12:28:21.473900] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78654 ] 00:26:12.334 [2024-07-10 12:28:21.647481] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:12.592 [2024-07-10 12:28:21.926738] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:12.592 [2024-07-10 12:28:21.926794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:13.525 12:28:22 ublk_recovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:13.525 12:28:22 ublk_recovery -- common/autotest_common.sh@862 -- # return 0 00:26:13.525 12:28:22 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:26:13.525 12:28:22 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.525 12:28:22 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:26:13.525 [2024-07-10 12:28:22.986758] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:26:13.525 [2024-07-10 12:28:22.989817] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:26:13.525 12:28:22 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.525 12:28:22 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:26:13.525 12:28:22 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.525 12:28:22 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:26:13.782 malloc0 00:26:13.782 12:28:23 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.782 12:28:23 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:26:13.782 12:28:23 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.782 12:28:23 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:26:13.782 [2024-07-10 12:28:23.164911] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128 00:26:13.782 [2024-07-10 12:28:23.165035] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:26:13.782 [2024-07-10 12:28:23.165049] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:26:13.782 [2024-07-10 12:28:23.165060] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:26:13.782 [2024-07-10 12:28:23.172772] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:26:13.783 [2024-07-10 12:28:23.172803] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:26:13.783 [2024-07-10 12:28:23.180764] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:26:13.783 [2024-07-10 12:28:23.180932] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:26:13.783 [2024-07-10 12:28:23.191771] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd 
UBLK_CMD_START_DEV completed 00:26:13.783 1 00:26:13.783 12:28:23 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.783 12:28:23 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:26:15.154 12:28:24 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=78700 00:26:15.154 12:28:24 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:26:15.154 12:28:24 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:26:15.154 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:26:15.154 fio-3.35 00:26:15.154 Starting 1 process 00:26:20.410 12:28:29 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 78654 00:26:20.410 12:28:29 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:26:25.693 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 78654 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:26:25.693 12:28:34 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=78806 00:26:25.693 12:28:34 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:26:25.693 12:28:34 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:25.693 12:28:34 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 78806 00:26:25.693 12:28:34 ublk_recovery -- common/autotest_common.sh@829 -- # '[' -z 78806 ']' 00:26:25.693 12:28:34 ublk_recovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:25.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:25.693 12:28:34 ublk_recovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:25.693 12:28:34 ublk_recovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:25.693 12:28:34 ublk_recovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:25.693 12:28:34 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:26:25.693 [2024-07-10 12:28:34.323027] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:26:25.693 [2024-07-10 12:28:34.323187] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78806 ] 00:26:25.693 [2024-07-10 12:28:34.496005] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:25.693 [2024-07-10 12:28:34.774180] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:25.693 [2024-07-10 12:28:34.774211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:26.627 12:28:35 ublk_recovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:26.627 12:28:35 ublk_recovery -- common/autotest_common.sh@862 -- # return 0 00:26:26.627 12:28:35 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:26:26.627 12:28:35 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.627 12:28:35 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:26:26.627 [2024-07-10 12:28:35.804754] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:26:26.627 [2024-07-10 12:28:35.807788] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:26:26.627 12:28:35 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.627 12:28:35 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:26:26.627 12:28:35 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.627 12:28:35 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:26:26.627 malloc0 00:26:26.627 12:28:35 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.627 12:28:35 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:26:26.627 12:28:35 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.627 12:28:35 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:26:26.627 [2024-07-10 12:28:35.980926] ublk.c:2095:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:26:26.627 [2024-07-10 12:28:35.980987] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:26:26.627 [2024-07-10 12:28:35.980997] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:26:26.627 [2024-07-10 12:28:35.988813] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:26:26.627 [2024-07-10 12:28:35.988840] ublk.c:2024:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:26:26.627 [2024-07-10 12:28:35.988943] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:26:26.627 1 00:26:26.627 12:28:35 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.627 12:28:35 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 78700 00:26:26.627 [2024-07-10 12:28:35.996786] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:26:26.627 [2024-07-10 12:28:36.000451] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:26:26.627 [2024-07-10 12:28:36.004014] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:26:26.627 [2024-07-10 12:28:36.004042] ublk.c: 378:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:27:22.915 00:27:22.915 
fio_test: (groupid=0, jobs=1): err= 0: pid=78703: Wed Jul 10 12:29:24 2024 00:27:22.915 read: IOPS=22.0k, BW=85.9MiB/s (90.1MB/s)(5157MiB/60004msec) 00:27:22.915 slat (usec): min=2, max=521, avg= 7.53, stdev= 2.27 00:27:22.915 clat (usec): min=979, max=6803.9k, avg=2897.26, stdev=49870.71 00:27:22.915 lat (usec): min=986, max=6803.9k, avg=2904.79, stdev=49870.72 00:27:22.915 clat percentiles (usec): 00:27:22.915 | 1.00th=[ 1991], 5.00th=[ 2147], 10.00th=[ 2212], 20.00th=[ 2245], 00:27:22.915 | 30.00th=[ 2311], 40.00th=[ 2343], 50.00th=[ 2376], 60.00th=[ 2442], 00:27:22.915 | 70.00th=[ 2507], 80.00th=[ 2573], 90.00th=[ 2966], 95.00th=[ 3752], 00:27:22.915 | 99.00th=[ 4948], 99.50th=[ 5538], 99.90th=[ 6980], 99.95th=[ 7504], 00:27:22.915 | 99.99th=[12780] 00:27:22.915 bw ( KiB/s): min=43776, max=106752, per=100.00%, avg=98788.51, stdev=9844.81, samples=106 00:27:22.915 iops : min=10944, max=26690, avg=24697.15, stdev=2461.21, samples=106 00:27:22.915 write: IOPS=22.0k, BW=85.8MiB/s (90.0MB/s)(5151MiB/60004msec); 0 zone resets 00:27:22.915 slat (usec): min=2, max=804, avg= 7.63, stdev= 2.36 00:27:22.915 clat (usec): min=1049, max=6803.9k, avg=2907.74, stdev=44710.93 00:27:22.915 lat (usec): min=1056, max=6803.9k, avg=2915.37, stdev=44710.94 00:27:22.915 clat percentiles (usec): 00:27:22.915 | 1.00th=[ 1975], 5.00th=[ 2180], 10.00th=[ 2278], 20.00th=[ 2343], 00:27:22.915 | 30.00th=[ 2376], 40.00th=[ 2442], 50.00th=[ 2474], 60.00th=[ 2573], 00:27:22.915 | 70.00th=[ 2606], 80.00th=[ 2671], 90.00th=[ 2933], 95.00th=[ 3752], 00:27:22.915 | 99.00th=[ 4948], 99.50th=[ 5604], 99.90th=[ 7111], 99.95th=[ 7701], 00:27:22.915 | 99.99th=[ 9634] 00:27:22.916 bw ( KiB/s): min=44336, max=106864, per=100.00%, avg=98669.52, stdev=9676.48, samples=106 00:27:22.916 iops : min=11084, max=26716, avg=24667.35, stdev=2419.10, samples=106 00:27:22.916 lat (usec) : 1000=0.01% 00:27:22.916 lat (msec) : 2=1.26%, 4=94.88%, 10=3.85%, 20=0.01%, >=2000=0.01% 00:27:22.916 cpu : usr=11.32%, sys=32.74%, ctx=112192, majf=0, minf=14 00:27:22.916 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:27:22.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:22.916 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:22.916 issued rwts: total=1320095,1318609,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:22.916 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:22.916 00:27:22.916 Run status group 0 (all jobs): 00:27:22.916 READ: bw=85.9MiB/s (90.1MB/s), 85.9MiB/s-85.9MiB/s (90.1MB/s-90.1MB/s), io=5157MiB (5407MB), run=60004-60004msec 00:27:22.916 WRITE: bw=85.8MiB/s (90.0MB/s), 85.8MiB/s-85.8MiB/s (90.0MB/s-90.0MB/s), io=5151MiB (5401MB), run=60004-60004msec 00:27:22.916 00:27:22.916 Disk stats (read/write): 00:27:22.916 ublkb1: ios=1317112/1315655, merge=0/0, ticks=3712335/3588541, in_queue=7300876, util=99.95% 00:27:22.916 12:29:24 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:27:22.916 12:29:24 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.916 12:29:24 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:27:22.916 [2024-07-10 12:29:24.480267] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:27:22.916 [2024-07-10 12:29:24.523812] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:27:22.916 [2024-07-10 12:29:24.524208] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:27:22.916 
[2024-07-10 12:29:24.531770] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:27:22.916 [2024-07-10 12:29:24.531929] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:27:22.916 [2024-07-10 12:29:24.531944] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:27:22.916 12:29:24 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.916 12:29:24 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:27:22.916 12:29:24 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.916 12:29:24 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:27:22.916 [2024-07-10 12:29:24.539861] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:27:22.916 [2024-07-10 12:29:24.547065] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:27:22.916 [2024-07-10 12:29:24.547105] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:27:22.916 12:29:24 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.916 12:29:24 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:27:22.916 12:29:24 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:27:22.916 12:29:24 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 78806 00:27:22.916 12:29:24 ublk_recovery -- common/autotest_common.sh@948 -- # '[' -z 78806 ']' 00:27:22.916 12:29:24 ublk_recovery -- common/autotest_common.sh@952 -- # kill -0 78806 00:27:22.916 12:29:24 ublk_recovery -- common/autotest_common.sh@953 -- # uname 00:27:22.916 12:29:24 ublk_recovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:22.916 12:29:24 ublk_recovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78806 00:27:22.916 killing process with pid 78806 00:27:22.916 12:29:24 ublk_recovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:22.916 12:29:24 ublk_recovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:22.916 12:29:24 ublk_recovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78806' 00:27:22.916 12:29:24 ublk_recovery -- common/autotest_common.sh@967 -- # kill 78806 00:27:22.916 12:29:24 ublk_recovery -- common/autotest_common.sh@972 -- # wait 78806 00:27:22.916 [2024-07-10 12:29:25.853770] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:27:22.916 [2024-07-10 12:29:25.853835] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:27:22.916 ************************************ 00:27:22.916 END TEST ublk_recovery 00:27:22.916 ************************************ 00:27:22.916 00:27:22.916 real 1m6.273s 00:27:22.916 user 1m48.696s 00:27:22.916 sys 0m38.810s 00:27:22.916 12:29:27 ublk_recovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:22.916 12:29:27 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:27:22.916 12:29:27 -- common/autotest_common.sh@1142 -- # return 0 00:27:22.916 12:29:27 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:27:22.916 12:29:27 -- spdk/autotest.sh@260 -- # timing_exit lib 00:27:22.916 12:29:27 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:22.916 12:29:27 -- common/autotest_common.sh@10 -- # set +x 00:27:22.916 12:29:27 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:27:22.916 12:29:27 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:27:22.916 12:29:27 -- spdk/autotest.sh@279 -- # '[' 0 -eq 1 ']' 00:27:22.916 12:29:27 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:27:22.916 12:29:27 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 
']' 00:27:22.916 12:29:27 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:27:22.916 12:29:27 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:27:22.916 12:29:27 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:27:22.916 12:29:27 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:27:22.916 12:29:27 -- spdk/autotest.sh@339 -- # '[' 1 -eq 1 ']' 00:27:22.916 12:29:27 -- spdk/autotest.sh@340 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:27:22.916 12:29:27 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:22.916 12:29:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:22.916 12:29:27 -- common/autotest_common.sh@10 -- # set +x 00:27:22.916 ************************************ 00:27:22.916 START TEST ftl 00:27:22.916 ************************************ 00:27:22.916 12:29:27 ftl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:27:22.916 * Looking for test storage... 00:27:22.916 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:27:22.916 12:29:27 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:27:22.916 12:29:27 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:27:22.916 12:29:27 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:27:22.916 12:29:27 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:27:22.916 12:29:27 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:27:22.916 12:29:27 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:22.916 12:29:27 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:22.916 12:29:27 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:27:22.916 12:29:27 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:27:22.916 12:29:27 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:22.916 12:29:27 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:22.916 12:29:27 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:27:22.916 12:29:27 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:27:22.916 12:29:27 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:22.916 12:29:27 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:22.916 12:29:27 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:27:22.916 12:29:27 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:27:22.916 12:29:27 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:22.916 12:29:27 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:22.916 12:29:27 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:27:22.916 12:29:27 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:27:22.916 12:29:27 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:22.916 12:29:27 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:22.916 12:29:27 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:22.916 12:29:27 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:22.916 12:29:27 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:27:22.916 12:29:27 ftl 
-- ftl/common.sh@23 -- # spdk_ini_pid= 00:27:22.916 12:29:27 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:22.916 12:29:27 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:22.916 12:29:27 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:22.916 12:29:27 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:27:22.916 12:29:27 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:27:22.916 12:29:27 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:27:22.916 12:29:27 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:27:22.916 12:29:27 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:27:22.916 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:22.916 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:27:22.916 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:27:22.916 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:27:22.916 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:27:22.916 12:29:28 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=79604 00:27:22.916 12:29:28 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:27:22.916 12:29:28 ftl -- ftl/ftl.sh@38 -- # waitforlisten 79604 00:27:22.916 12:29:28 ftl -- common/autotest_common.sh@829 -- # '[' -z 79604 ']' 00:27:22.916 12:29:28 ftl -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:22.916 12:29:28 ftl -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:22.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:22.916 12:29:28 ftl -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:22.916 12:29:28 ftl -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:22.916 12:29:28 ftl -- common/autotest_common.sh@10 -- # set +x 00:27:22.916 [2024-07-10 12:29:28.696140] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:27:22.916 [2024-07-10 12:29:28.696275] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79604 ] 00:27:22.916 [2024-07-10 12:29:28.869173] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:22.916 [2024-07-10 12:29:29.101693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:22.916 12:29:29 ftl -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:22.916 12:29:29 ftl -- common/autotest_common.sh@862 -- # return 0 00:27:22.916 12:29:29 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:27:22.916 12:29:29 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:27:22.916 12:29:30 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:27:22.916 12:29:30 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:27:22.916 12:29:31 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:27:22.917 12:29:31 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:27:22.917 12:29:31 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:27:22.917 12:29:31 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:27:22.917 12:29:31 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:27:22.917 12:29:31 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:27:22.917 12:29:31 ftl -- ftl/ftl.sh@50 -- # break 00:27:22.917 12:29:31 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:27:22.917 12:29:31 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:27:22.917 12:29:31 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:27:22.917 12:29:31 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:27:22.917 12:29:31 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:27:22.917 12:29:31 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:27:22.917 12:29:31 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:27:22.917 12:29:31 ftl -- ftl/ftl.sh@63 -- # break 00:27:22.917 12:29:31 ftl -- ftl/ftl.sh@66 -- # killprocess 79604 00:27:22.917 12:29:31 ftl -- common/autotest_common.sh@948 -- # '[' -z 79604 ']' 00:27:22.917 12:29:31 ftl -- common/autotest_common.sh@952 -- # kill -0 79604 00:27:22.917 12:29:31 ftl -- common/autotest_common.sh@953 -- # uname 00:27:22.917 12:29:31 ftl -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:22.917 12:29:31 ftl -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79604 00:27:22.917 killing process with pid 79604 00:27:22.917 12:29:31 ftl -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:22.917 12:29:31 ftl -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:22.917 12:29:31 ftl -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79604' 00:27:22.917 12:29:31 ftl -- common/autotest_common.sh@967 -- # kill 79604 00:27:22.917 12:29:31 ftl -- common/autotest_common.sh@972 -- # wait 79604 00:27:24.819 12:29:34 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:27:24.819 12:29:34 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic 
/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:27:24.819 12:29:34 ftl -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:27:24.819 12:29:34 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:24.819 12:29:34 ftl -- common/autotest_common.sh@10 -- # set +x 00:27:24.819 ************************************ 00:27:24.819 START TEST ftl_fio_basic 00:27:24.819 ************************************ 00:27:24.819 12:29:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:27:24.819 * Looking for test storage... 00:27:24.819 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:27:24.819 12:29:34 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:27:24.819 12:29:34 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:27:24.819 12:29:34 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:27:24.819 12:29:34 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:27:24.819 12:29:34 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:27:24.819 12:29:34 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:24.819 12:29:34 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:24.819 12:29:34 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:27:24.819 12:29:34 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:27:24.819 12:29:34 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:24.819 12:29:34 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:24.819 12:29:34 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:27:24.819 12:29:34 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:27:24.819 12:29:34 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:24.819 12:29:34 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:24.819 12:29:34 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:27:24.819 12:29:34 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:27:24.819 12:29:34 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:24.819 12:29:34 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:24.819 12:29:34 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:27:24.819 12:29:34 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:27:24.819 12:29:34 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:24.819 12:29:34 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:24.819 12:29:34 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:24.819 12:29:34 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:24.819 12:29:34 ftl.ftl_fio_basic -- 
ftl/common.sh@23 -- # export spdk_ini_pid= 00:27:24.819 12:29:34 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:27:24.819 12:29:34 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:24.819 12:29:34 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:24.819 12:29:34 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:27:24.819 12:29:34 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:27:24.819 12:29:34 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:27:24.819 12:29:34 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:27:24.819 12:29:34 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:24.819 12:29:34 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:27:24.819 12:29:34 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:27:24.819 12:29:34 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:27:24.819 12:29:34 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:27:24.819 12:29:34 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:27:24.819 12:29:34 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:27:24.819 12:29:34 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:27:24.819 12:29:34 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:27:24.819 12:29:34 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:27:24.819 12:29:34 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:24.819 12:29:34 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:24.819 12:29:34 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:27:24.819 12:29:34 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=79739 00:27:24.819 12:29:34 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 79739 00:27:24.819 12:29:34 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:27:24.819 12:29:34 ftl.ftl_fio_basic -- common/autotest_common.sh@829 -- # '[' -z 79739 ']' 00:27:24.819 12:29:34 ftl.ftl_fio_basic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:24.819 12:29:34 ftl.ftl_fio_basic -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:24.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:24.819 12:29:34 ftl.ftl_fio_basic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:24.819 12:29:34 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:24.819 12:29:34 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:27:25.100 [2024-07-10 12:29:34.360615] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:27:25.100 [2024-07-10 12:29:34.360997] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79739 ] 00:27:25.100 [2024-07-10 12:29:34.538348] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:25.358 [2024-07-10 12:29:34.782202] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:25.358 [2024-07-10 12:29:34.782343] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:25.358 [2024-07-10 12:29:34.782379] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:26.293 12:29:35 ftl.ftl_fio_basic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:26.293 12:29:35 ftl.ftl_fio_basic -- common/autotest_common.sh@862 -- # return 0 00:27:26.293 12:29:35 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:27:26.293 12:29:35 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:27:26.293 12:29:35 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:27:26.293 12:29:35 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:27:26.293 12:29:35 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:27:26.293 12:29:35 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:27:26.552 12:29:35 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:27:26.552 12:29:35 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:27:26.552 12:29:35 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:27:26.552 12:29:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:27:26.552 12:29:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:27:26.552 12:29:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:27:26.552 12:29:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:27:26.552 12:29:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:27:26.810 12:29:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:27:26.810 { 00:27:26.810 "name": "nvme0n1", 00:27:26.810 "aliases": [ 00:27:26.810 "420ae408-1044-400d-8dd7-718882ca02a7" 00:27:26.810 ], 00:27:26.810 "product_name": "NVMe disk", 00:27:26.810 "block_size": 4096, 00:27:26.810 "num_blocks": 1310720, 00:27:26.810 "uuid": "420ae408-1044-400d-8dd7-718882ca02a7", 00:27:26.810 "assigned_rate_limits": { 00:27:26.810 "rw_ios_per_sec": 0, 00:27:26.810 "rw_mbytes_per_sec": 0, 00:27:26.810 "r_mbytes_per_sec": 0, 00:27:26.810 "w_mbytes_per_sec": 0 00:27:26.810 }, 00:27:26.810 "claimed": false, 00:27:26.810 "zoned": false, 00:27:26.810 "supported_io_types": { 00:27:26.810 "read": true, 00:27:26.810 "write": true, 00:27:26.810 "unmap": true, 00:27:26.810 "flush": true, 00:27:26.810 "reset": true, 00:27:26.810 "nvme_admin": true, 00:27:26.810 "nvme_io": true, 00:27:26.810 "nvme_io_md": false, 00:27:26.810 "write_zeroes": true, 00:27:26.810 "zcopy": false, 00:27:26.810 "get_zone_info": false, 00:27:26.810 "zone_management": false, 00:27:26.810 "zone_append": false, 00:27:26.810 "compare": true, 00:27:26.810 "compare_and_write": false, 00:27:26.810 "abort": true, 00:27:26.810 "seek_hole": false, 00:27:26.810 
"seek_data": false, 00:27:26.810 "copy": true, 00:27:26.810 "nvme_iov_md": false 00:27:26.810 }, 00:27:26.810 "driver_specific": { 00:27:26.810 "nvme": [ 00:27:26.810 { 00:27:26.810 "pci_address": "0000:00:11.0", 00:27:26.810 "trid": { 00:27:26.810 "trtype": "PCIe", 00:27:26.810 "traddr": "0000:00:11.0" 00:27:26.810 }, 00:27:26.810 "ctrlr_data": { 00:27:26.810 "cntlid": 0, 00:27:26.810 "vendor_id": "0x1b36", 00:27:26.810 "model_number": "QEMU NVMe Ctrl", 00:27:26.810 "serial_number": "12341", 00:27:26.810 "firmware_revision": "8.0.0", 00:27:26.810 "subnqn": "nqn.2019-08.org.qemu:12341", 00:27:26.810 "oacs": { 00:27:26.810 "security": 0, 00:27:26.810 "format": 1, 00:27:26.810 "firmware": 0, 00:27:26.810 "ns_manage": 1 00:27:26.810 }, 00:27:26.810 "multi_ctrlr": false, 00:27:26.810 "ana_reporting": false 00:27:26.810 }, 00:27:26.810 "vs": { 00:27:26.810 "nvme_version": "1.4" 00:27:26.810 }, 00:27:26.810 "ns_data": { 00:27:26.810 "id": 1, 00:27:26.810 "can_share": false 00:27:26.810 } 00:27:26.810 } 00:27:26.810 ], 00:27:26.810 "mp_policy": "active_passive" 00:27:26.810 } 00:27:26.810 } 00:27:26.810 ]' 00:27:26.810 12:29:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:27:26.810 12:29:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:27:26.810 12:29:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:27:26.810 12:29:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=1310720 00:27:26.810 12:29:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:27:26.810 12:29:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 5120 00:27:26.810 12:29:36 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:27:26.810 12:29:36 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:27:26.811 12:29:36 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:27:26.811 12:29:36 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:26.811 12:29:36 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:27.069 12:29:36 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:27:27.069 12:29:36 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:27:27.327 12:29:36 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=394a7b7c-6a78-42d2-9d1a-842eacbe82e5 00:27:27.327 12:29:36 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 394a7b7c-6a78-42d2-9d1a-842eacbe82e5 00:27:27.587 12:29:36 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=823c7b91-89fc-475e-bd8e-913236b303ef 00:27:27.587 12:29:36 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 823c7b91-89fc-475e-bd8e-913236b303ef 00:27:27.587 12:29:36 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:27:27.587 12:29:36 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:27:27.587 12:29:36 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=823c7b91-89fc-475e-bd8e-913236b303ef 00:27:27.587 12:29:36 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:27:27.587 12:29:36 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 823c7b91-89fc-475e-bd8e-913236b303ef 00:27:27.587 12:29:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=823c7b91-89fc-475e-bd8e-913236b303ef 00:27:27.587 12:29:36 
ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:27:27.587 12:29:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:27:27.587 12:29:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:27:27.587 12:29:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 823c7b91-89fc-475e-bd8e-913236b303ef 00:27:27.587 12:29:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:27:27.587 { 00:27:27.587 "name": "823c7b91-89fc-475e-bd8e-913236b303ef", 00:27:27.587 "aliases": [ 00:27:27.587 "lvs/nvme0n1p0" 00:27:27.587 ], 00:27:27.587 "product_name": "Logical Volume", 00:27:27.587 "block_size": 4096, 00:27:27.587 "num_blocks": 26476544, 00:27:27.587 "uuid": "823c7b91-89fc-475e-bd8e-913236b303ef", 00:27:27.587 "assigned_rate_limits": { 00:27:27.587 "rw_ios_per_sec": 0, 00:27:27.587 "rw_mbytes_per_sec": 0, 00:27:27.587 "r_mbytes_per_sec": 0, 00:27:27.587 "w_mbytes_per_sec": 0 00:27:27.587 }, 00:27:27.587 "claimed": false, 00:27:27.587 "zoned": false, 00:27:27.587 "supported_io_types": { 00:27:27.587 "read": true, 00:27:27.587 "write": true, 00:27:27.587 "unmap": true, 00:27:27.587 "flush": false, 00:27:27.587 "reset": true, 00:27:27.587 "nvme_admin": false, 00:27:27.587 "nvme_io": false, 00:27:27.587 "nvme_io_md": false, 00:27:27.587 "write_zeroes": true, 00:27:27.587 "zcopy": false, 00:27:27.587 "get_zone_info": false, 00:27:27.587 "zone_management": false, 00:27:27.587 "zone_append": false, 00:27:27.587 "compare": false, 00:27:27.587 "compare_and_write": false, 00:27:27.587 "abort": false, 00:27:27.587 "seek_hole": true, 00:27:27.587 "seek_data": true, 00:27:27.587 "copy": false, 00:27:27.587 "nvme_iov_md": false 00:27:27.587 }, 00:27:27.587 "driver_specific": { 00:27:27.587 "lvol": { 00:27:27.587 "lvol_store_uuid": "394a7b7c-6a78-42d2-9d1a-842eacbe82e5", 00:27:27.587 "base_bdev": "nvme0n1", 00:27:27.587 "thin_provision": true, 00:27:27.587 "num_allocated_clusters": 0, 00:27:27.587 "snapshot": false, 00:27:27.587 "clone": false, 00:27:27.587 "esnap_clone": false 00:27:27.587 } 00:27:27.587 } 00:27:27.587 } 00:27:27.587 ]' 00:27:27.587 12:29:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:27:27.587 12:29:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:27:27.587 12:29:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:27:27.846 12:29:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:27:27.846 12:29:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:27:27.846 12:29:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:27:27.846 12:29:37 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:27:27.846 12:29:37 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:27:27.846 12:29:37 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:27:28.105 12:29:37 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:27:28.105 12:29:37 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:27:28.105 12:29:37 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 823c7b91-89fc-475e-bd8e-913236b303ef 00:27:28.105 12:29:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=823c7b91-89fc-475e-bd8e-913236b303ef 00:27:28.105 12:29:37 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1379 -- # local bdev_info 00:27:28.105 12:29:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:27:28.105 12:29:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:27:28.105 12:29:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 823c7b91-89fc-475e-bd8e-913236b303ef 00:27:28.105 12:29:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:27:28.105 { 00:27:28.105 "name": "823c7b91-89fc-475e-bd8e-913236b303ef", 00:27:28.105 "aliases": [ 00:27:28.105 "lvs/nvme0n1p0" 00:27:28.105 ], 00:27:28.105 "product_name": "Logical Volume", 00:27:28.105 "block_size": 4096, 00:27:28.105 "num_blocks": 26476544, 00:27:28.105 "uuid": "823c7b91-89fc-475e-bd8e-913236b303ef", 00:27:28.105 "assigned_rate_limits": { 00:27:28.105 "rw_ios_per_sec": 0, 00:27:28.105 "rw_mbytes_per_sec": 0, 00:27:28.105 "r_mbytes_per_sec": 0, 00:27:28.105 "w_mbytes_per_sec": 0 00:27:28.105 }, 00:27:28.105 "claimed": false, 00:27:28.105 "zoned": false, 00:27:28.105 "supported_io_types": { 00:27:28.105 "read": true, 00:27:28.105 "write": true, 00:27:28.105 "unmap": true, 00:27:28.105 "flush": false, 00:27:28.105 "reset": true, 00:27:28.105 "nvme_admin": false, 00:27:28.105 "nvme_io": false, 00:27:28.105 "nvme_io_md": false, 00:27:28.105 "write_zeroes": true, 00:27:28.105 "zcopy": false, 00:27:28.105 "get_zone_info": false, 00:27:28.105 "zone_management": false, 00:27:28.105 "zone_append": false, 00:27:28.105 "compare": false, 00:27:28.105 "compare_and_write": false, 00:27:28.105 "abort": false, 00:27:28.105 "seek_hole": true, 00:27:28.105 "seek_data": true, 00:27:28.105 "copy": false, 00:27:28.105 "nvme_iov_md": false 00:27:28.105 }, 00:27:28.105 "driver_specific": { 00:27:28.105 "lvol": { 00:27:28.105 "lvol_store_uuid": "394a7b7c-6a78-42d2-9d1a-842eacbe82e5", 00:27:28.105 "base_bdev": "nvme0n1", 00:27:28.105 "thin_provision": true, 00:27:28.105 "num_allocated_clusters": 0, 00:27:28.105 "snapshot": false, 00:27:28.105 "clone": false, 00:27:28.105 "esnap_clone": false 00:27:28.105 } 00:27:28.105 } 00:27:28.105 } 00:27:28.105 ]' 00:27:28.105 12:29:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:27:28.106 12:29:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:27:28.106 12:29:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:27:28.365 12:29:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:27:28.365 12:29:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:27:28.365 12:29:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:27:28.365 12:29:37 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:27:28.365 12:29:37 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:27:28.365 12:29:37 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:27:28.365 12:29:37 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:27:28.365 12:29:37 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:27:28.365 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:27:28.365 12:29:37 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 823c7b91-89fc-475e-bd8e-913236b303ef 00:27:28.365 12:29:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=823c7b91-89fc-475e-bd8e-913236b303ef 
00:27:28.365 12:29:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:27:28.365 12:29:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:27:28.365 12:29:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:27:28.365 12:29:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 823c7b91-89fc-475e-bd8e-913236b303ef 00:27:28.624 12:29:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:27:28.624 { 00:27:28.624 "name": "823c7b91-89fc-475e-bd8e-913236b303ef", 00:27:28.624 "aliases": [ 00:27:28.624 "lvs/nvme0n1p0" 00:27:28.624 ], 00:27:28.624 "product_name": "Logical Volume", 00:27:28.624 "block_size": 4096, 00:27:28.624 "num_blocks": 26476544, 00:27:28.624 "uuid": "823c7b91-89fc-475e-bd8e-913236b303ef", 00:27:28.624 "assigned_rate_limits": { 00:27:28.624 "rw_ios_per_sec": 0, 00:27:28.624 "rw_mbytes_per_sec": 0, 00:27:28.624 "r_mbytes_per_sec": 0, 00:27:28.624 "w_mbytes_per_sec": 0 00:27:28.624 }, 00:27:28.624 "claimed": false, 00:27:28.624 "zoned": false, 00:27:28.624 "supported_io_types": { 00:27:28.624 "read": true, 00:27:28.624 "write": true, 00:27:28.624 "unmap": true, 00:27:28.624 "flush": false, 00:27:28.624 "reset": true, 00:27:28.624 "nvme_admin": false, 00:27:28.624 "nvme_io": false, 00:27:28.624 "nvme_io_md": false, 00:27:28.624 "write_zeroes": true, 00:27:28.624 "zcopy": false, 00:27:28.624 "get_zone_info": false, 00:27:28.624 "zone_management": false, 00:27:28.624 "zone_append": false, 00:27:28.624 "compare": false, 00:27:28.624 "compare_and_write": false, 00:27:28.624 "abort": false, 00:27:28.624 "seek_hole": true, 00:27:28.624 "seek_data": true, 00:27:28.624 "copy": false, 00:27:28.624 "nvme_iov_md": false 00:27:28.624 }, 00:27:28.624 "driver_specific": { 00:27:28.624 "lvol": { 00:27:28.624 "lvol_store_uuid": "394a7b7c-6a78-42d2-9d1a-842eacbe82e5", 00:27:28.624 "base_bdev": "nvme0n1", 00:27:28.624 "thin_provision": true, 00:27:28.624 "num_allocated_clusters": 0, 00:27:28.624 "snapshot": false, 00:27:28.624 "clone": false, 00:27:28.624 "esnap_clone": false 00:27:28.624 } 00:27:28.624 } 00:27:28.624 } 00:27:28.624 ]' 00:27:28.624 12:29:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:27:28.624 12:29:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:27:28.624 12:29:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:27:28.624 12:29:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:27:28.625 12:29:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:27:28.625 12:29:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:27:28.625 12:29:38 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:27:28.625 12:29:38 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:27:28.625 12:29:38 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 823c7b91-89fc-475e-bd8e-913236b303ef -c nvc0n1p0 --l2p_dram_limit 60 00:27:28.885 [2024-07-10 12:29:38.254287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.885 [2024-07-10 12:29:38.254355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:28.885 [2024-07-10 12:29:38.254373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:28.885 [2024-07-10 12:29:38.254386] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.885 [2024-07-10 12:29:38.254467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.885 [2024-07-10 12:29:38.254482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:28.885 [2024-07-10 12:29:38.254493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:27:28.885 [2024-07-10 12:29:38.254506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.885 [2024-07-10 12:29:38.254536] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:28.885 [2024-07-10 12:29:38.255698] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:28.885 [2024-07-10 12:29:38.255724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.885 [2024-07-10 12:29:38.255753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:28.885 [2024-07-10 12:29:38.255764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.195 ms 00:27:28.885 [2024-07-10 12:29:38.255776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.885 [2024-07-10 12:29:38.255865] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID c31e430a-d6a7-413d-b75b-156cd29bfabe 00:27:28.885 [2024-07-10 12:29:38.258035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.885 [2024-07-10 12:29:38.258072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:27:28.885 [2024-07-10 12:29:38.258089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:27:28.885 [2024-07-10 12:29:38.258099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.885 [2024-07-10 12:29:38.267535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.885 [2024-07-10 12:29:38.267568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:28.885 [2024-07-10 12:29:38.267584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.350 ms 00:27:28.885 [2024-07-10 12:29:38.267598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.885 [2024-07-10 12:29:38.267722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.885 [2024-07-10 12:29:38.267761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:28.885 [2024-07-10 12:29:38.267776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:27:28.885 [2024-07-10 12:29:38.267787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.885 [2024-07-10 12:29:38.267873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.885 [2024-07-10 12:29:38.267886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:28.885 [2024-07-10 12:29:38.267899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:27:28.885 [2024-07-10 12:29:38.267910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.885 [2024-07-10 12:29:38.267951] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:28.885 [2024-07-10 12:29:38.273319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.885 [2024-07-10 12:29:38.273354] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:28.885 [2024-07-10 12:29:38.273371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.387 ms 00:27:28.885 [2024-07-10 12:29:38.273384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.885 [2024-07-10 12:29:38.273431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.885 [2024-07-10 12:29:38.273445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:28.885 [2024-07-10 12:29:38.273456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:27:28.885 [2024-07-10 12:29:38.273469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.885 [2024-07-10 12:29:38.273514] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:27:28.885 [2024-07-10 12:29:38.273670] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:28.885 [2024-07-10 12:29:38.273687] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:28.885 [2024-07-10 12:29:38.273707] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:27:28.885 [2024-07-10 12:29:38.273720] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:28.885 [2024-07-10 12:29:38.273748] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:28.885 [2024-07-10 12:29:38.273760] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:28.885 [2024-07-10 12:29:38.273774] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:28.885 [2024-07-10 12:29:38.273784] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:28.885 [2024-07-10 12:29:38.273800] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:28.885 [2024-07-10 12:29:38.273811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.885 [2024-07-10 12:29:38.273823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:28.885 [2024-07-10 12:29:38.273834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:27:28.885 [2024-07-10 12:29:38.273847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.885 [2024-07-10 12:29:38.273930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.885 [2024-07-10 12:29:38.273943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:28.885 [2024-07-10 12:29:38.273954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:27:28.885 [2024-07-10 12:29:38.273966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.885 [2024-07-10 12:29:38.274066] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:28.885 [2024-07-10 12:29:38.274086] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:28.885 [2024-07-10 12:29:38.274096] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:28.885 [2024-07-10 12:29:38.274110] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:28.885 [2024-07-10 12:29:38.274121] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:28.885 [2024-07-10 
12:29:38.274133] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:28.885 [2024-07-10 12:29:38.274143] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:28.885 [2024-07-10 12:29:38.274154] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:28.885 [2024-07-10 12:29:38.274168] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:28.885 [2024-07-10 12:29:38.274179] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:28.885 [2024-07-10 12:29:38.274189] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:28.885 [2024-07-10 12:29:38.274201] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:28.885 [2024-07-10 12:29:38.274211] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:28.885 [2024-07-10 12:29:38.274224] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:28.885 [2024-07-10 12:29:38.274234] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:28.885 [2024-07-10 12:29:38.274246] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:28.885 [2024-07-10 12:29:38.274256] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:28.885 [2024-07-10 12:29:38.274270] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:28.885 [2024-07-10 12:29:38.274279] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:28.885 [2024-07-10 12:29:38.274291] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:28.885 [2024-07-10 12:29:38.274300] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:28.885 [2024-07-10 12:29:38.274312] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:28.885 [2024-07-10 12:29:38.274321] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:28.885 [2024-07-10 12:29:38.274333] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:28.885 [2024-07-10 12:29:38.274342] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:28.885 [2024-07-10 12:29:38.274353] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:28.885 [2024-07-10 12:29:38.274362] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:28.885 [2024-07-10 12:29:38.274374] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:28.885 [2024-07-10 12:29:38.274383] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:28.885 [2024-07-10 12:29:38.274395] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:28.885 [2024-07-10 12:29:38.274404] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:28.885 [2024-07-10 12:29:38.274415] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:28.885 [2024-07-10 12:29:38.274424] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:28.885 [2024-07-10 12:29:38.274438] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:28.885 [2024-07-10 12:29:38.274446] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:28.885 [2024-07-10 12:29:38.274458] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:28.885 [2024-07-10 12:29:38.274467] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 00:27:28.885 [2024-07-10 12:29:38.274478] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:28.885 [2024-07-10 12:29:38.274487] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:28.885 [2024-07-10 12:29:38.274500] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:28.885 [2024-07-10 12:29:38.274511] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:28.885 [2024-07-10 12:29:38.274523] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:28.885 [2024-07-10 12:29:38.274532] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:28.885 [2024-07-10 12:29:38.274543] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:28.885 [2024-07-10 12:29:38.274553] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:28.885 [2024-07-10 12:29:38.274582] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:28.885 [2024-07-10 12:29:38.274592] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:28.885 [2024-07-10 12:29:38.274607] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:28.885 [2024-07-10 12:29:38.274617] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:28.885 [2024-07-10 12:29:38.274632] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:28.885 [2024-07-10 12:29:38.274641] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:28.885 [2024-07-10 12:29:38.274653] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:28.885 [2024-07-10 12:29:38.274663] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:28.886 [2024-07-10 12:29:38.274678] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:28.886 [2024-07-10 12:29:38.274691] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:28.886 [2024-07-10 12:29:38.274705] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:28.886 [2024-07-10 12:29:38.274716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:28.886 [2024-07-10 12:29:38.274743] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:28.886 [2024-07-10 12:29:38.274755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:28.886 [2024-07-10 12:29:38.274768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:28.886 [2024-07-10 12:29:38.274779] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:28.886 [2024-07-10 12:29:38.274792] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:28.886 [2024-07-10 12:29:38.274803] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:28.886 [2024-07-10 
12:29:38.274817] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:28.886 [2024-07-10 12:29:38.274828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:28.886 [2024-07-10 12:29:38.274843] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:28.886 [2024-07-10 12:29:38.274854] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:28.886 [2024-07-10 12:29:38.274867] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:28.886 [2024-07-10 12:29:38.274877] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:28.886 [2024-07-10 12:29:38.274890] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:28.886 [2024-07-10 12:29:38.274910] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:28.886 [2024-07-10 12:29:38.274923] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:28.886 [2024-07-10 12:29:38.274935] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:28.886 [2024-07-10 12:29:38.274948] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:28.886 [2024-07-10 12:29:38.274958] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:28.886 [2024-07-10 12:29:38.274972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.886 [2024-07-10 12:29:38.274984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:28.886 [2024-07-10 12:29:38.274997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.955 ms 00:27:28.886 [2024-07-10 12:29:38.275008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.886 [2024-07-10 12:29:38.275075] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:27:28.886 [2024-07-10 12:29:38.275087] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:27:33.079 [2024-07-10 12:29:42.164157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.079 [2024-07-10 12:29:42.164254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:27:33.079 [2024-07-10 12:29:42.164277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3895.382 ms 00:27:33.079 [2024-07-10 12:29:42.164289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.079 [2024-07-10 12:29:42.209806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.079 [2024-07-10 12:29:42.209867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:33.079 [2024-07-10 12:29:42.209887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.294 ms 00:27:33.079 [2024-07-10 12:29:42.209898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.079 [2024-07-10 12:29:42.210065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.079 [2024-07-10 12:29:42.210078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:33.079 [2024-07-10 12:29:42.210093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:27:33.079 [2024-07-10 12:29:42.210104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.079 [2024-07-10 12:29:42.277206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.079 [2024-07-10 12:29:42.277276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:33.079 [2024-07-10 12:29:42.277300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.146 ms 00:27:33.079 [2024-07-10 12:29:42.277314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.079 [2024-07-10 12:29:42.277378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.079 [2024-07-10 12:29:42.277392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:33.079 [2024-07-10 12:29:42.277410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:33.079 [2024-07-10 12:29:42.277423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.079 [2024-07-10 12:29:42.278299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.079 [2024-07-10 12:29:42.278318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:33.079 [2024-07-10 12:29:42.278340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.782 ms 00:27:33.079 [2024-07-10 12:29:42.278353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.079 [2024-07-10 12:29:42.278509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.079 [2024-07-10 12:29:42.278525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:33.079 [2024-07-10 12:29:42.278542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:27:33.079 [2024-07-10 12:29:42.278555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.079 [2024-07-10 12:29:42.309151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.079 [2024-07-10 12:29:42.309203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:33.079 [2024-07-10 
12:29:42.309222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.606 ms 00:27:33.079 [2024-07-10 12:29:42.309233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.079 [2024-07-10 12:29:42.324216] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:27:33.079 [2024-07-10 12:29:42.350694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.079 [2024-07-10 12:29:42.350775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:33.079 [2024-07-10 12:29:42.350797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.391 ms 00:27:33.079 [2024-07-10 12:29:42.350811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.079 [2024-07-10 12:29:42.443686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.079 [2024-07-10 12:29:42.443776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:27:33.079 [2024-07-10 12:29:42.443795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 92.951 ms 00:27:33.079 [2024-07-10 12:29:42.443809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.079 [2024-07-10 12:29:42.444039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.079 [2024-07-10 12:29:42.444063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:33.079 [2024-07-10 12:29:42.444076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.155 ms 00:27:33.079 [2024-07-10 12:29:42.444092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.079 [2024-07-10 12:29:42.482219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.079 [2024-07-10 12:29:42.482281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:27:33.079 [2024-07-10 12:29:42.482298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.123 ms 00:27:33.079 [2024-07-10 12:29:42.482311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.079 [2024-07-10 12:29:42.519710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.079 [2024-07-10 12:29:42.519764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:27:33.079 [2024-07-10 12:29:42.519781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.406 ms 00:27:33.079 [2024-07-10 12:29:42.519808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.079 [2024-07-10 12:29:42.520618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.079 [2024-07-10 12:29:42.520656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:33.079 [2024-07-10 12:29:42.520669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.756 ms 00:27:33.079 [2024-07-10 12:29:42.520683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.339 [2024-07-10 12:29:42.632050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.339 [2024-07-10 12:29:42.632124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:27:33.339 [2024-07-10 12:29:42.632146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 111.459 ms 00:27:33.339 [2024-07-10 12:29:42.632164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.339 [2024-07-10 
12:29:42.673591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.339 [2024-07-10 12:29:42.673650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:27:33.339 [2024-07-10 12:29:42.673667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.435 ms 00:27:33.339 [2024-07-10 12:29:42.673682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.339 [2024-07-10 12:29:42.712787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.339 [2024-07-10 12:29:42.712838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:27:33.339 [2024-07-10 12:29:42.712854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.094 ms 00:27:33.339 [2024-07-10 12:29:42.712867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.339 [2024-07-10 12:29:42.751093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.339 [2024-07-10 12:29:42.751144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:33.339 [2024-07-10 12:29:42.751161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.234 ms 00:27:33.339 [2024-07-10 12:29:42.751174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.339 [2024-07-10 12:29:42.751245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.339 [2024-07-10 12:29:42.751262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:33.339 [2024-07-10 12:29:42.751278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:27:33.339 [2024-07-10 12:29:42.751295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.339 [2024-07-10 12:29:42.751432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.339 [2024-07-10 12:29:42.751452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:33.339 [2024-07-10 12:29:42.751463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:27:33.339 [2024-07-10 12:29:42.751476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.339 [2024-07-10 12:29:42.752871] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4505.413 ms, result 0 00:27:33.339 { 00:27:33.339 "name": "ftl0", 00:27:33.339 "uuid": "c31e430a-d6a7-413d-b75b-156cd29bfabe" 00:27:33.339 } 00:27:33.339 12:29:42 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:27:33.339 12:29:42 ftl.ftl_fio_basic -- common/autotest_common.sh@897 -- # local bdev_name=ftl0 00:27:33.339 12:29:42 ftl.ftl_fio_basic -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:27:33.339 12:29:42 ftl.ftl_fio_basic -- common/autotest_common.sh@899 -- # local i 00:27:33.339 12:29:42 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:27:33.339 12:29:42 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:27:33.339 12:29:42 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:27:33.598 12:29:42 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:27:33.857 [ 00:27:33.857 { 00:27:33.857 "name": "ftl0", 00:27:33.857 "aliases": [ 00:27:33.857 "c31e430a-d6a7-413d-b75b-156cd29bfabe" 00:27:33.857 ], 00:27:33.857 "product_name": "FTL 
disk", 00:27:33.857 "block_size": 4096, 00:27:33.857 "num_blocks": 20971520, 00:27:33.857 "uuid": "c31e430a-d6a7-413d-b75b-156cd29bfabe", 00:27:33.857 "assigned_rate_limits": { 00:27:33.857 "rw_ios_per_sec": 0, 00:27:33.857 "rw_mbytes_per_sec": 0, 00:27:33.857 "r_mbytes_per_sec": 0, 00:27:33.857 "w_mbytes_per_sec": 0 00:27:33.857 }, 00:27:33.857 "claimed": false, 00:27:33.857 "zoned": false, 00:27:33.857 "supported_io_types": { 00:27:33.857 "read": true, 00:27:33.857 "write": true, 00:27:33.857 "unmap": true, 00:27:33.857 "flush": true, 00:27:33.857 "reset": false, 00:27:33.857 "nvme_admin": false, 00:27:33.857 "nvme_io": false, 00:27:33.857 "nvme_io_md": false, 00:27:33.857 "write_zeroes": true, 00:27:33.857 "zcopy": false, 00:27:33.857 "get_zone_info": false, 00:27:33.857 "zone_management": false, 00:27:33.857 "zone_append": false, 00:27:33.857 "compare": false, 00:27:33.857 "compare_and_write": false, 00:27:33.857 "abort": false, 00:27:33.857 "seek_hole": false, 00:27:33.857 "seek_data": false, 00:27:33.857 "copy": false, 00:27:33.857 "nvme_iov_md": false 00:27:33.857 }, 00:27:33.857 "driver_specific": { 00:27:33.857 "ftl": { 00:27:33.857 "base_bdev": "823c7b91-89fc-475e-bd8e-913236b303ef", 00:27:33.857 "cache": "nvc0n1p0" 00:27:33.857 } 00:27:33.857 } 00:27:33.857 } 00:27:33.857 ] 00:27:33.857 12:29:43 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # return 0 00:27:33.857 12:29:43 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:27:33.857 12:29:43 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:27:33.857 12:29:43 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:27:33.857 12:29:43 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:27:34.116 [2024-07-10 12:29:43.481555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.116 [2024-07-10 12:29:43.481614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:34.116 [2024-07-10 12:29:43.481634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:34.116 [2024-07-10 12:29:43.481651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.117 [2024-07-10 12:29:43.481714] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:34.117 [2024-07-10 12:29:43.485799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.117 [2024-07-10 12:29:43.485838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:34.117 [2024-07-10 12:29:43.485852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.058 ms 00:27:34.117 [2024-07-10 12:29:43.485865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.117 [2024-07-10 12:29:43.486754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.117 [2024-07-10 12:29:43.486782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:34.117 [2024-07-10 12:29:43.486794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.828 ms 00:27:34.117 [2024-07-10 12:29:43.486807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.117 [2024-07-10 12:29:43.489358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.117 [2024-07-10 12:29:43.489383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:34.117 
[2024-07-10 12:29:43.489395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.512 ms 00:27:34.117 [2024-07-10 12:29:43.489407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.117 [2024-07-10 12:29:43.494465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.117 [2024-07-10 12:29:43.494503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:34.117 [2024-07-10 12:29:43.494516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.006 ms 00:27:34.117 [2024-07-10 12:29:43.494528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.117 [2024-07-10 12:29:43.532865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.117 [2024-07-10 12:29:43.532913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:34.117 [2024-07-10 12:29:43.532930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.258 ms 00:27:34.117 [2024-07-10 12:29:43.532943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.117 [2024-07-10 12:29:43.555548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.117 [2024-07-10 12:29:43.555603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:34.117 [2024-07-10 12:29:43.555639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.579 ms 00:27:34.117 [2024-07-10 12:29:43.555653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.117 [2024-07-10 12:29:43.556000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.117 [2024-07-10 12:29:43.556018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:34.117 [2024-07-10 12:29:43.556029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.240 ms 00:27:34.117 [2024-07-10 12:29:43.556042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.117 [2024-07-10 12:29:43.595045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.117 [2024-07-10 12:29:43.595095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:27:34.117 [2024-07-10 12:29:43.595111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.001 ms 00:27:34.117 [2024-07-10 12:29:43.595124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.377 [2024-07-10 12:29:43.632954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.377 [2024-07-10 12:29:43.632998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:27:34.377 [2024-07-10 12:29:43.633013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.821 ms 00:27:34.377 [2024-07-10 12:29:43.633025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.377 [2024-07-10 12:29:43.670764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.377 [2024-07-10 12:29:43.670806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:34.377 [2024-07-10 12:29:43.670836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.730 ms 00:27:34.377 [2024-07-10 12:29:43.670849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.377 [2024-07-10 12:29:43.711018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.377 [2024-07-10 12:29:43.711065] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:34.377 [2024-07-10 12:29:43.711080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.060 ms 00:27:34.377 [2024-07-10 12:29:43.711093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.377 [2024-07-10 12:29:43.711158] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:34.377 [2024-07-10 12:29:43.711181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:34.377 [2024-07-10 12:29:43.711194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:34.377 [2024-07-10 12:29:43.711208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:34.377 [2024-07-10 12:29:43.711220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:34.377 [2024-07-10 12:29:43.711234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:34.377 [2024-07-10 12:29:43.711244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:34.377 [2024-07-10 12:29:43.711258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:34.377 [2024-07-10 12:29:43.711276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:34.377 [2024-07-10 12:29:43.711293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:34.377 [2024-07-10 12:29:43.711304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:34.377 [2024-07-10 12:29:43.711317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:34.377 [2024-07-10 12:29:43.711328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:34.377 [2024-07-10 12:29:43.711341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:34.377 [2024-07-10 12:29:43.711352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:34.377 [2024-07-10 12:29:43.711366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:34.377 [2024-07-10 12:29:43.711377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:34.377 [2024-07-10 12:29:43.711391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:34.377 [2024-07-10 12:29:43.711401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:34.377 [2024-07-10 12:29:43.711415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:34.377 [2024-07-10 12:29:43.711426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:34.377 [2024-07-10 12:29:43.711440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:34.377 [2024-07-10 12:29:43.711451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:34.377 
[2024-07-10 12:29:43.711466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:34.377 [2024-07-10 12:29:43.711477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:34.378 [2024-07-10 12:29:43.711493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:34.378 [2024-07-10 12:29:43.711505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:34.378 [2024-07-10 12:29:43.711518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:34.378 [2024-07-10 12:29:43.711528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:34.378 [2024-07-10 12:29:43.711541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:34.378 [2024-07-10 12:29:43.711552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:34.378 [2024-07-10 12:29:43.711565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:34.378 [2024-07-10 12:29:43.711576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:34.378 [2024-07-10 12:29:43.711589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:34.378 [2024-07-10 12:29:43.711604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:34.378 [2024-07-10 12:29:43.711617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:34.378 [2024-07-10 12:29:43.711628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:34.378 [2024-07-10 12:29:43.711642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:34.378 [2024-07-10 12:29:43.711653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:34.378 [2024-07-10 12:29:43.711667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:34.378 [2024-07-10 12:29:43.711677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:34.378 [2024-07-10 12:29:43.711694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:34.378 [2024-07-10 12:29:43.711705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:34.378 [2024-07-10 12:29:43.711719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:34.378 [2024-07-10 12:29:43.711740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:34.378 [2024-07-10 12:29:43.711754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:34.378 [2024-07-10 12:29:43.711765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:34.378 [2024-07-10 12:29:43.711778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:27:34.378 [2024-07-10 12:29:43.711790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:34.378 [2024-07-10 12:29:43.711805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:34.378 [2024-07-10 12:29:43.711816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:34.378 [2024-07-10 12:29:43.711829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:34.378 [2024-07-10 12:29:43.711840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:34.378 [2024-07-10 12:29:43.711853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:34.378 [2024-07-10 12:29:43.711865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:34.378 [2024-07-10 12:29:43.711878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:34.378 [2024-07-10 12:29:43.711889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:34.378 [2024-07-10 12:29:43.711905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:34.378 [2024-07-10 12:29:43.711916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:34.378 [2024-07-10 12:29:43.711929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:34.378 [2024-07-10 12:29:43.711940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:34.378 [2024-07-10 12:29:43.711954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:34.378 [2024-07-10 12:29:43.711965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:34.378 [2024-07-10 12:29:43.711978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:34.378 [2024-07-10 12:29:43.711989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:34.378 [2024-07-10 12:29:43.712002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:34.378 [2024-07-10 12:29:43.712015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:34.378 [2024-07-10 12:29:43.712028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:34.378 [2024-07-10 12:29:43.712039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:34.378 [2024-07-10 12:29:43.712060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:34.378 [2024-07-10 12:29:43.712072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:34.378 [2024-07-10 12:29:43.712087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:34.378 [2024-07-10 12:29:43.712098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:27:34.378 [2024-07-10 12:29:43.712114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:34.378 [2024-07-10 12:29:43.712125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:34.378 [2024-07-10 12:29:43.712139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:34.378 [2024-07-10 12:29:43.712150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:34.378 [2024-07-10 12:29:43.712164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:34.378 [2024-07-10 12:29:43.712175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:34.378 [2024-07-10 12:29:43.712188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:34.378 [2024-07-10 12:29:43.712199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:34.378 [2024-07-10 12:29:43.712213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:34.378 [2024-07-10 12:29:43.712224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:34.378 [2024-07-10 12:29:43.712237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:34.378 [2024-07-10 12:29:43.712248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:34.378 [2024-07-10 12:29:43.712260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:34.378 [2024-07-10 12:29:43.712271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:34.378 [2024-07-10 12:29:43.712285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:34.378 [2024-07-10 12:29:43.712295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:34.378 [2024-07-10 12:29:43.712311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:34.378 [2024-07-10 12:29:43.712322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:34.378 [2024-07-10 12:29:43.712335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:34.378 [2024-07-10 12:29:43.712363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:34.378 [2024-07-10 12:29:43.712377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:34.378 [2024-07-10 12:29:43.712388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:34.378 [2024-07-10 12:29:43.712402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:34.378 [2024-07-10 12:29:43.712412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:34.378 [2024-07-10 12:29:43.712426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:34.379 [2024-07-10 12:29:43.712438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:34.379 [2024-07-10 12:29:43.712451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:34.379 [2024-07-10 12:29:43.712463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:34.379 [2024-07-10 12:29:43.712486] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:34.379 [2024-07-10 12:29:43.712498] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c31e430a-d6a7-413d-b75b-156cd29bfabe 00:27:34.379 [2024-07-10 12:29:43.712511] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:34.379 [2024-07-10 12:29:43.712521] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:34.379 [2024-07-10 12:29:43.712539] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:34.379 [2024-07-10 12:29:43.712550] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:34.379 [2024-07-10 12:29:43.712562] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:34.379 [2024-07-10 12:29:43.712573] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:34.379 [2024-07-10 12:29:43.712585] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:34.379 [2024-07-10 12:29:43.712595] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:34.379 [2024-07-10 12:29:43.712606] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:34.379 [2024-07-10 12:29:43.712617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.379 [2024-07-10 12:29:43.712630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:34.379 [2024-07-10 12:29:43.712641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.463 ms 00:27:34.379 [2024-07-10 12:29:43.712653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.379 [2024-07-10 12:29:43.735205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.379 [2024-07-10 12:29:43.735239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:34.379 [2024-07-10 12:29:43.735253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.494 ms 00:27:34.379 [2024-07-10 12:29:43.735266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.379 [2024-07-10 12:29:43.735910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.379 [2024-07-10 12:29:43.735931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:34.379 [2024-07-10 12:29:43.735943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.589 ms 00:27:34.379 [2024-07-10 12:29:43.735956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.379 [2024-07-10 12:29:43.814252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:34.379 [2024-07-10 12:29:43.814312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:34.379 [2024-07-10 12:29:43.814328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:34.379 [2024-07-10 12:29:43.814342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:27:34.379 [2024-07-10 12:29:43.814442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:34.379 [2024-07-10 12:29:43.814456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:34.379 [2024-07-10 12:29:43.814468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:34.379 [2024-07-10 12:29:43.814480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.379 [2024-07-10 12:29:43.814619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:34.379 [2024-07-10 12:29:43.814641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:34.379 [2024-07-10 12:29:43.814652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:34.379 [2024-07-10 12:29:43.814666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.379 [2024-07-10 12:29:43.814706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:34.379 [2024-07-10 12:29:43.814723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:34.379 [2024-07-10 12:29:43.814756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:34.379 [2024-07-10 12:29:43.814769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.638 [2024-07-10 12:29:43.950382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:34.638 [2024-07-10 12:29:43.950452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:34.638 [2024-07-10 12:29:43.950467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:34.638 [2024-07-10 12:29:43.950480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.638 [2024-07-10 12:29:44.052016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:34.638 [2024-07-10 12:29:44.052098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:34.638 [2024-07-10 12:29:44.052115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:34.638 [2024-07-10 12:29:44.052129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.638 [2024-07-10 12:29:44.052256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:34.638 [2024-07-10 12:29:44.052272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:34.638 [2024-07-10 12:29:44.052288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:34.638 [2024-07-10 12:29:44.052301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.638 [2024-07-10 12:29:44.052416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:34.638 [2024-07-10 12:29:44.052434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:34.638 [2024-07-10 12:29:44.052445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:34.638 [2024-07-10 12:29:44.052458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.638 [2024-07-10 12:29:44.052598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:34.638 [2024-07-10 12:29:44.052614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:34.638 [2024-07-10 12:29:44.052628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:34.638 [2024-07-10 
12:29:44.052641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.638 [2024-07-10 12:29:44.052708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:34.638 [2024-07-10 12:29:44.052724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:34.638 [2024-07-10 12:29:44.052752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:34.638 [2024-07-10 12:29:44.052766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.638 [2024-07-10 12:29:44.052836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:34.638 [2024-07-10 12:29:44.052850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:34.638 [2024-07-10 12:29:44.052861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:34.638 [2024-07-10 12:29:44.052877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.638 [2024-07-10 12:29:44.052944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:34.638 [2024-07-10 12:29:44.052962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:34.638 [2024-07-10 12:29:44.052973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:34.638 [2024-07-10 12:29:44.052986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.638 [2024-07-10 12:29:44.053250] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 572.587 ms, result 0 00:27:34.638 true 00:27:34.638 12:29:44 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 79739 00:27:34.638 12:29:44 ftl.ftl_fio_basic -- common/autotest_common.sh@948 -- # '[' -z 79739 ']' 00:27:34.638 12:29:44 ftl.ftl_fio_basic -- common/autotest_common.sh@952 -- # kill -0 79739 00:27:34.638 12:29:44 ftl.ftl_fio_basic -- common/autotest_common.sh@953 -- # uname 00:27:34.638 12:29:44 ftl.ftl_fio_basic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:34.638 12:29:44 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79739 00:27:34.638 12:29:44 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:34.638 killing process with pid 79739 00:27:34.638 12:29:44 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:34.638 12:29:44 ftl.ftl_fio_basic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79739' 00:27:34.638 12:29:44 ftl.ftl_fio_basic -- common/autotest_common.sh@967 -- # kill 79739 00:27:34.638 12:29:44 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # wait 79739 00:27:39.939 12:29:48 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:27:39.939 12:29:48 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:27:39.939 12:29:48 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:27:39.939 12:29:48 ftl.ftl_fio_basic -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:39.939 12:29:48 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:27:39.939 12:29:48 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:27:39.939 12:29:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:27:39.939 12:29:48 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:39.939 12:29:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:39.939 12:29:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:39.939 12:29:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:39.939 12:29:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:27:39.939 12:29:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:39.939 12:29:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:39.939 12:29:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:39.939 12:29:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:39.939 12:29:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:27:39.939 12:29:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:39.939 12:29:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:39.939 12:29:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:27:39.939 12:29:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:39.939 12:29:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:27:39.939 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:27:39.939 fio-3.35 00:27:39.939 Starting 1 thread 00:27:45.208 00:27:45.208 test: (groupid=0, jobs=1): err= 0: pid=79963: Wed Jul 10 12:29:54 2024 00:27:45.208 read: IOPS=962, BW=63.9MiB/s (67.0MB/s)(255MiB/3982msec) 00:27:45.208 slat (nsec): min=4281, max=29978, avg=6196.01, stdev=2280.26 00:27:45.208 clat (usec): min=267, max=1004, avg=472.32, stdev=58.05 00:27:45.208 lat (usec): min=273, max=1009, avg=478.52, stdev=58.32 00:27:45.208 clat percentiles (usec): 00:27:45.208 | 1.00th=[ 322], 5.00th=[ 388], 10.00th=[ 392], 20.00th=[ 424], 00:27:45.208 | 30.00th=[ 457], 40.00th=[ 457], 50.00th=[ 461], 60.00th=[ 478], 00:27:45.208 | 70.00th=[ 523], 80.00th=[ 529], 90.00th=[ 529], 95.00th=[ 537], 00:27:45.208 | 99.00th=[ 603], 99.50th=[ 635], 99.90th=[ 701], 99.95th=[ 783], 00:27:45.208 | 99.99th=[ 1004] 00:27:45.208 write: IOPS=969, BW=64.4MiB/s (67.5MB/s)(256MiB/3978msec); 0 zone resets 00:27:45.208 slat (usec): min=15, max=143, avg=19.30, stdev= 4.99 00:27:45.208 clat (usec): min=348, max=937, avg=527.06, stdev=71.84 00:27:45.208 lat (usec): min=372, max=955, avg=546.36, stdev=72.02 00:27:45.208 clat percentiles (usec): 00:27:45.208 | 1.00th=[ 404], 5.00th=[ 412], 10.00th=[ 465], 20.00th=[ 478], 00:27:45.208 | 30.00th=[ 482], 40.00th=[ 494], 50.00th=[ 537], 60.00th=[ 545], 00:27:45.208 | 70.00th=[ 545], 80.00th=[ 562], 90.00th=[ 611], 95.00th=[ 619], 00:27:45.208 | 99.00th=[ 824], 99.50th=[ 865], 99.90th=[ 930], 99.95th=[ 930], 00:27:45.208 | 99.99th=[ 938] 00:27:45.208 bw ( KiB/s): min=64199, max=68000, per=100.00%, avg=66019.29, stdev=1214.31, samples=7 00:27:45.208 iops : min= 944, max= 1000, avg=970.86, stdev=17.88, samples=7 00:27:45.208 lat (usec) : 500=51.96%, 750=47.03%, 1000=1.00% 00:27:45.208 lat (msec) : 
2=0.01% 00:27:45.208 cpu : usr=98.77%, sys=0.28%, ctx=12, majf=0, minf=1171 00:27:45.208 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:45.208 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.208 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.208 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:45.208 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:45.208 00:27:45.208 Run status group 0 (all jobs): 00:27:45.208 READ: bw=63.9MiB/s (67.0MB/s), 63.9MiB/s-63.9MiB/s (67.0MB/s-67.0MB/s), io=255MiB (267MB), run=3982-3982msec 00:27:45.208 WRITE: bw=64.4MiB/s (67.5MB/s), 64.4MiB/s-64.4MiB/s (67.5MB/s-67.5MB/s), io=256MiB (269MB), run=3978-3978msec 00:27:47.109 ----------------------------------------------------- 00:27:47.109 Suppressions used: 00:27:47.109 count bytes template 00:27:47.109 1 5 /usr/src/fio/parse.c 00:27:47.109 1 8 libtcmalloc_minimal.so 00:27:47.109 1 904 libcrypto.so 00:27:47.109 ----------------------------------------------------- 00:27:47.109 00:27:47.109 12:29:56 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:27:47.109 12:29:56 ftl.ftl_fio_basic -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:47.109 12:29:56 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:27:47.109 12:29:56 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:27:47.109 12:29:56 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:27:47.109 12:29:56 ftl.ftl_fio_basic -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:47.109 12:29:56 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:27:47.109 12:29:56 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:27:47.109 12:29:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:27:47.109 12:29:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:47.109 12:29:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:47.109 12:29:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:47.109 12:29:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:47.109 12:29:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:27:47.109 12:29:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:47.109 12:29:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:47.109 12:29:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:47.110 12:29:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:27:47.110 12:29:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:47.110 12:29:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:47.110 12:29:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:47.110 12:29:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:27:47.110 12:29:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:47.110 12:29:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:27:47.368 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:27:47.368 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:27:47.368 fio-3.35 00:27:47.368 Starting 2 threads 00:28:14.037 00:28:14.037 first_half: (groupid=0, jobs=1): err= 0: pid=80067: Wed Jul 10 12:30:22 2024 00:28:14.037 read: IOPS=2693, BW=10.5MiB/s (11.0MB/s)(255MiB/24243msec) 00:28:14.037 slat (nsec): min=3494, max=32929, avg=6310.82, stdev=2105.36 00:28:14.037 clat (usec): min=898, max=276223, avg=37456.30, stdev=18707.40 00:28:14.037 lat (usec): min=906, max=276228, avg=37462.61, stdev=18707.62 00:28:14.037 clat percentiles (msec): 00:28:14.037 | 1.00th=[ 17], 5.00th=[ 32], 10.00th=[ 32], 20.00th=[ 32], 00:28:14.037 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:28:14.037 | 70.00th=[ 35], 80.00th=[ 37], 90.00th=[ 43], 95.00th=[ 61], 00:28:14.037 | 99.00th=[ 144], 99.50th=[ 167], 99.90th=[ 190], 99.95th=[ 234], 00:28:14.037 | 99.99th=[ 268] 00:28:14.037 write: IOPS=3234, BW=12.6MiB/s (13.2MB/s)(256MiB/20263msec); 0 zone resets 00:28:14.037 slat (usec): min=4, max=710, avg= 7.99, stdev= 7.49 00:28:14.037 clat (usec): min=367, max=103079, avg=9999.91, stdev=16420.99 00:28:14.037 lat (usec): min=381, max=103086, avg=10007.91, stdev=16421.08 00:28:14.037 clat percentiles (usec): 00:28:14.037 | 1.00th=[ 988], 5.00th=[ 1303], 10.00th=[ 1565], 20.00th=[ 1958], 00:28:14.037 | 30.00th=[ 3359], 40.00th=[ 4752], 50.00th=[ 5735], 60.00th=[ 6587], 00:28:14.037 | 70.00th=[ 7767], 80.00th=[ 10814], 90.00th=[ 13698], 95.00th=[ 41681], 00:28:14.037 | 99.00th=[ 82314], 99.50th=[ 85459], 99.90th=[ 99091], 99.95th=[101188], 00:28:14.037 | 99.99th=[102237] 00:28:14.037 bw ( KiB/s): min= 152, max=41600, per=96.51%, avg=22795.13, stdev=14687.93, samples=23 00:28:14.037 iops : min= 38, max=10400, avg=5698.78, stdev=3671.98, samples=23 00:28:14.037 lat (usec) : 500=0.01%, 750=0.07%, 1000=0.46% 00:28:14.037 lat (msec) : 2=9.94%, 4=7.43%, 10=21.34%, 20=7.61%, 50=46.84% 00:28:14.037 lat (msec) : 100=5.17%, 250=1.11%, 500=0.02% 00:28:14.037 cpu : usr=99.15%, sys=0.24%, ctx=55, majf=0, minf=5601 00:28:14.037 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:28:14.037 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:14.038 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:14.038 issued rwts: total=65308,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:14.038 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:14.038 second_half: (groupid=0, jobs=1): err= 0: pid=80068: Wed Jul 10 12:30:22 2024 00:28:14.038 read: IOPS=2672, BW=10.4MiB/s (10.9MB/s)(255MiB/24435msec) 00:28:14.038 slat (nsec): min=3370, max=32003, avg=6081.79, stdev=1919.20 00:28:14.038 clat (usec): min=928, max=282683, avg=36703.34, stdev=21356.02 00:28:14.038 lat (usec): min=936, max=282691, avg=36709.42, stdev=21356.27 00:28:14.038 clat percentiles (msec): 00:28:14.038 | 1.00th=[ 9], 5.00th=[ 31], 10.00th=[ 32], 20.00th=[ 32], 00:28:14.038 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:28:14.038 | 70.00th=[ 34], 80.00th=[ 36], 90.00th=[ 39], 95.00th=[ 53], 
00:28:14.038 | 99.00th=[ 159], 99.50th=[ 180], 99.90th=[ 201], 99.95th=[ 230], 00:28:14.038 | 99.99th=[ 275] 00:28:14.038 write: IOPS=2952, BW=11.5MiB/s (12.1MB/s)(256MiB/22197msec); 0 zone resets 00:28:14.038 slat (usec): min=4, max=389, avg= 7.76, stdev= 4.35 00:28:14.038 clat (usec): min=479, max=103425, avg=11123.33, stdev=18010.90 00:28:14.038 lat (usec): min=485, max=103432, avg=11131.09, stdev=18011.03 00:28:14.038 clat percentiles (usec): 00:28:14.038 | 1.00th=[ 971], 5.00th=[ 1237], 10.00th=[ 1450], 20.00th=[ 1729], 00:28:14.038 | 30.00th=[ 2057], 40.00th=[ 3326], 50.00th=[ 5080], 60.00th=[ 6587], 00:28:14.038 | 70.00th=[ 8356], 80.00th=[ 12125], 90.00th=[ 33424], 95.00th=[ 60031], 00:28:14.038 | 99.00th=[ 83362], 99.50th=[ 86508], 99.90th=[ 99091], 99.95th=[101188], 00:28:14.038 | 99.99th=[103285] 00:28:14.038 bw ( KiB/s): min= 2272, max=48360, per=92.49%, avg=21845.33, stdev=12709.24, samples=24 00:28:14.038 iops : min= 568, max=12090, avg=5461.33, stdev=3177.31, samples=24 00:28:14.038 lat (usec) : 500=0.01%, 750=0.07%, 1000=0.54% 00:28:14.038 lat (msec) : 2=13.80%, 4=8.03%, 10=16.09%, 20=7.25%, 50=48.92% 00:28:14.038 lat (msec) : 100=3.82%, 250=1.47%, 500=0.01% 00:28:14.038 cpu : usr=99.28%, sys=0.18%, ctx=31, majf=0, minf=5510 00:28:14.038 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:28:14.038 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:14.038 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:14.038 issued rwts: total=65314,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:14.038 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:14.038 00:28:14.038 Run status group 0 (all jobs): 00:28:14.038 READ: bw=20.9MiB/s (21.9MB/s), 10.4MiB/s-10.5MiB/s (10.9MB/s-11.0MB/s), io=510MiB (535MB), run=24243-24435msec 00:28:14.038 WRITE: bw=23.1MiB/s (24.2MB/s), 11.5MiB/s-12.6MiB/s (12.1MB/s-13.2MB/s), io=512MiB (537MB), run=20263-22197msec 00:28:15.941 ----------------------------------------------------- 00:28:15.941 Suppressions used: 00:28:15.941 count bytes template 00:28:15.941 2 10 /usr/src/fio/parse.c 00:28:15.941 3 288 /usr/src/fio/iolog.c 00:28:15.941 1 8 libtcmalloc_minimal.so 00:28:15.941 1 904 libcrypto.so 00:28:15.941 ----------------------------------------------------- 00:28:15.941 00:28:15.941 12:30:25 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:28:15.941 12:30:25 ftl.ftl_fio_basic -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:15.941 12:30:25 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:28:15.941 12:30:25 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:28:15.941 12:30:25 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:28:15.941 12:30:25 ftl.ftl_fio_basic -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:15.941 12:30:25 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:28:15.941 12:30:25 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:28:15.941 12:30:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:28:15.941 12:30:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:15.941 12:30:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 
00:28:15.941 12:30:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:15.941 12:30:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:15.941 12:30:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:28:15.941 12:30:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:15.941 12:30:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:15.941 12:30:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:15.941 12:30:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:28:15.941 12:30:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:15.941 12:30:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:28:15.941 12:30:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:28:15.941 12:30:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:28:15.941 12:30:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:28:15.941 12:30:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:28:16.201 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:28:16.201 fio-3.35 00:28:16.201 Starting 1 thread 00:28:31.109 00:28:31.109 test: (groupid=0, jobs=1): err= 0: pid=80393: Wed Jul 10 12:30:40 2024 00:28:31.109 read: IOPS=7945, BW=31.0MiB/s (32.5MB/s)(255MiB/8206msec) 00:28:31.109 slat (nsec): min=3370, max=69390, avg=5241.69, stdev=1782.88 00:28:31.109 clat (usec): min=612, max=31343, avg=16101.17, stdev=1036.06 00:28:31.109 lat (usec): min=616, max=31350, avg=16106.41, stdev=1036.07 00:28:31.109 clat percentiles (usec): 00:28:31.109 | 1.00th=[15008], 5.00th=[15270], 10.00th=[15401], 20.00th=[15533], 00:28:31.109 | 30.00th=[15664], 40.00th=[15795], 50.00th=[16057], 60.00th=[16188], 00:28:31.109 | 70.00th=[16319], 80.00th=[16450], 90.00th=[16712], 95.00th=[16909], 00:28:31.109 | 99.00th=[18744], 99.50th=[22676], 99.90th=[27919], 99.95th=[28705], 00:28:31.109 | 99.99th=[30802] 00:28:31.109 write: IOPS=12.7k, BW=49.6MiB/s (52.1MB/s)(256MiB/5157msec); 0 zone resets 00:28:31.109 slat (usec): min=4, max=1239, avg= 8.23, stdev= 7.45 00:28:31.109 clat (usec): min=517, max=54873, avg=10018.68, stdev=12414.51 00:28:31.109 lat (usec): min=523, max=54881, avg=10026.91, stdev=12414.60 00:28:31.109 clat percentiles (usec): 00:28:31.109 | 1.00th=[ 889], 5.00th=[ 1106], 10.00th=[ 1237], 20.00th=[ 1450], 00:28:31.109 | 30.00th=[ 1680], 40.00th=[ 2311], 50.00th=[ 6521], 60.00th=[ 7701], 00:28:31.109 | 70.00th=[ 8717], 80.00th=[10814], 90.00th=[34866], 95.00th=[37487], 00:28:31.109 | 99.00th=[47449], 99.50th=[50070], 99.90th=[52691], 99.95th=[53216], 00:28:31.109 | 99.99th=[54264] 00:28:31.109 bw ( KiB/s): min=12168, max=66592, per=93.76%, avg=47662.55, stdev=14266.81, samples=11 00:28:31.109 iops : min= 3042, max=16648, avg=11915.64, stdev=3566.70, samples=11 00:28:31.109 lat (usec) : 750=0.13%, 1000=1.04% 00:28:31.109 lat (msec) : 2=17.54%, 4=2.33%, 10=17.05%, 20=53.55%, 50=8.12% 00:28:31.109 lat (msec) : 100=0.23% 00:28:31.109 cpu : usr=98.69%, sys=0.57%, 
ctx=20, majf=0, minf=5567 00:28:31.109 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:28:31.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.109 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:31.109 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.109 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:31.109 00:28:31.109 Run status group 0 (all jobs): 00:28:31.109 READ: bw=31.0MiB/s (32.5MB/s), 31.0MiB/s-31.0MiB/s (32.5MB/s-32.5MB/s), io=255MiB (267MB), run=8206-8206msec 00:28:31.109 WRITE: bw=49.6MiB/s (52.1MB/s), 49.6MiB/s-49.6MiB/s (52.1MB/s-52.1MB/s), io=256MiB (268MB), run=5157-5157msec 00:28:33.064 ----------------------------------------------------- 00:28:33.064 Suppressions used: 00:28:33.064 count bytes template 00:28:33.064 1 5 /usr/src/fio/parse.c 00:28:33.064 2 192 /usr/src/fio/iolog.c 00:28:33.064 1 8 libtcmalloc_minimal.so 00:28:33.064 1 904 libcrypto.so 00:28:33.064 ----------------------------------------------------- 00:28:33.064 00:28:33.064 12:30:42 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:28:33.064 12:30:42 ftl.ftl_fio_basic -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:33.064 12:30:42 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:28:33.064 12:30:42 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:33.064 12:30:42 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:28:33.064 Remove shared memory files 00:28:33.064 12:30:42 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:33.064 12:30:42 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:28:33.064 12:30:42 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:28:33.064 12:30:42 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid62102 /dev/shm/spdk_tgt_trace.pid78654 00:28:33.064 12:30:42 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:33.064 12:30:42 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:28:33.064 00:28:33.064 real 1m8.267s 00:28:33.064 user 2m26.338s 00:28:33.064 sys 0m3.806s 00:28:33.064 12:30:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:33.065 ************************************ 00:28:33.065 END TEST ftl_fio_basic 00:28:33.065 ************************************ 00:28:33.065 12:30:42 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:28:33.065 12:30:42 ftl -- common/autotest_common.sh@1142 -- # return 0 00:28:33.065 12:30:42 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:28:33.065 12:30:42 ftl -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:28:33.065 12:30:42 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:33.065 12:30:42 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:33.065 ************************************ 00:28:33.065 START TEST ftl_bdevperf 00:28:33.065 ************************************ 00:28:33.065 12:30:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:28:33.065 * Looking for test storage... 
00:28:33.323 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:28:33.323 12:30:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:28:33.323 12:30:42 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:28:33.323 12:30:42 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:28:33.323 12:30:42 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:28:33.323 12:30:42 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:28:33.323 12:30:42 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:28:33.323 12:30:42 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:33.323 12:30:42 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:28:33.323 12:30:42 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:28:33.323 12:30:42 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:33.323 12:30:42 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:33.323 12:30:42 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:28:33.323 12:30:42 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:28:33.323 12:30:42 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:33.323 12:30:42 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:33.323 12:30:42 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:28:33.323 12:30:42 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:28:33.323 12:30:42 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:33.323 12:30:42 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:33.323 12:30:42 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:28:33.323 12:30:42 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:28:33.323 12:30:42 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:33.323 12:30:42 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:33.323 12:30:42 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:33.323 12:30:42 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:33.323 12:30:42 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:28:33.323 12:30:42 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:28:33.323 12:30:42 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:33.323 12:30:42 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:33.323 12:30:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:28:33.323 12:30:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:28:33.323 12:30:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:28:33.323 12:30:42 ftl.ftl_bdevperf 
-- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:33.323 12:30:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:28:33.323 12:30:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # timing_enter '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:28:33.323 12:30:42 ftl.ftl_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:33.323 12:30:42 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:33.323 12:30:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@19 -- # bdevperf_pid=80627 00:28:33.323 12:30:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:28:33.323 12:30:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:28:33.323 12:30:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # waitforlisten 80627 00:28:33.323 12:30:42 ftl.ftl_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 80627 ']' 00:28:33.323 12:30:42 ftl.ftl_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:33.323 12:30:42 ftl.ftl_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:33.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:33.323 12:30:42 ftl.ftl_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:33.323 12:30:42 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:33.323 12:30:42 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:33.323 [2024-07-10 12:30:42.685577] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:28:33.323 [2024-07-10 12:30:42.685711] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80627 ] 00:28:33.581 [2024-07-10 12:30:42.857667] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:33.839 [2024-07-10 12:30:43.095362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:34.098 12:30:43 ftl.ftl_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:34.098 12:30:43 ftl.ftl_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:28:34.098 12:30:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:28:34.098 12:30:43 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:28:34.098 12:30:43 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:28:34.098 12:30:43 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:28:34.098 12:30:43 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:28:34.098 12:30:43 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:28:34.357 12:30:43 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:28:34.357 12:30:43 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:28:34.357 12:30:43 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:28:34.357 12:30:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:28:34.357 12:30:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:34.357 12:30:43 
ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:28:34.357 12:30:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:28:34.357 12:30:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:28:34.617 12:30:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:34.617 { 00:28:34.617 "name": "nvme0n1", 00:28:34.617 "aliases": [ 00:28:34.617 "e2bf0229-2172-42d3-9959-21563714f8ad" 00:28:34.617 ], 00:28:34.617 "product_name": "NVMe disk", 00:28:34.617 "block_size": 4096, 00:28:34.617 "num_blocks": 1310720, 00:28:34.617 "uuid": "e2bf0229-2172-42d3-9959-21563714f8ad", 00:28:34.617 "assigned_rate_limits": { 00:28:34.617 "rw_ios_per_sec": 0, 00:28:34.617 "rw_mbytes_per_sec": 0, 00:28:34.617 "r_mbytes_per_sec": 0, 00:28:34.617 "w_mbytes_per_sec": 0 00:28:34.617 }, 00:28:34.617 "claimed": true, 00:28:34.617 "claim_type": "read_many_write_one", 00:28:34.617 "zoned": false, 00:28:34.617 "supported_io_types": { 00:28:34.617 "read": true, 00:28:34.617 "write": true, 00:28:34.617 "unmap": true, 00:28:34.617 "flush": true, 00:28:34.617 "reset": true, 00:28:34.617 "nvme_admin": true, 00:28:34.617 "nvme_io": true, 00:28:34.617 "nvme_io_md": false, 00:28:34.617 "write_zeroes": true, 00:28:34.617 "zcopy": false, 00:28:34.617 "get_zone_info": false, 00:28:34.617 "zone_management": false, 00:28:34.617 "zone_append": false, 00:28:34.617 "compare": true, 00:28:34.617 "compare_and_write": false, 00:28:34.617 "abort": true, 00:28:34.617 "seek_hole": false, 00:28:34.617 "seek_data": false, 00:28:34.617 "copy": true, 00:28:34.617 "nvme_iov_md": false 00:28:34.617 }, 00:28:34.617 "driver_specific": { 00:28:34.617 "nvme": [ 00:28:34.617 { 00:28:34.617 "pci_address": "0000:00:11.0", 00:28:34.617 "trid": { 00:28:34.617 "trtype": "PCIe", 00:28:34.617 "traddr": "0000:00:11.0" 00:28:34.617 }, 00:28:34.617 "ctrlr_data": { 00:28:34.617 "cntlid": 0, 00:28:34.617 "vendor_id": "0x1b36", 00:28:34.617 "model_number": "QEMU NVMe Ctrl", 00:28:34.617 "serial_number": "12341", 00:28:34.617 "firmware_revision": "8.0.0", 00:28:34.617 "subnqn": "nqn.2019-08.org.qemu:12341", 00:28:34.617 "oacs": { 00:28:34.617 "security": 0, 00:28:34.617 "format": 1, 00:28:34.617 "firmware": 0, 00:28:34.617 "ns_manage": 1 00:28:34.617 }, 00:28:34.617 "multi_ctrlr": false, 00:28:34.617 "ana_reporting": false 00:28:34.617 }, 00:28:34.617 "vs": { 00:28:34.617 "nvme_version": "1.4" 00:28:34.617 }, 00:28:34.617 "ns_data": { 00:28:34.617 "id": 1, 00:28:34.617 "can_share": false 00:28:34.617 } 00:28:34.617 } 00:28:34.617 ], 00:28:34.617 "mp_policy": "active_passive" 00:28:34.617 } 00:28:34.617 } 00:28:34.617 ]' 00:28:34.617 12:30:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:34.617 12:30:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:28:34.617 12:30:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:34.617 12:30:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=1310720 00:28:34.617 12:30:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:28:34.617 12:30:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 5120 00:28:34.617 12:30:44 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:28:34.617 12:30:44 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:28:34.617 12:30:44 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:28:34.617 12:30:44 ftl.ftl_bdevperf 
-- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:34.617 12:30:44 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:34.877 12:30:44 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=394a7b7c-6a78-42d2-9d1a-842eacbe82e5 00:28:34.877 12:30:44 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:28:34.877 12:30:44 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 394a7b7c-6a78-42d2-9d1a-842eacbe82e5 00:28:35.136 12:30:44 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:28:35.395 12:30:44 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=df0af10f-79f8-4521-840e-1acdd4879a50 00:28:35.395 12:30:44 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u df0af10f-79f8-4521-840e-1acdd4879a50 00:28:35.395 12:30:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # split_bdev=5c02a353-360e-47d9-8d0f-2454fb3db49e 00:28:35.395 12:30:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@24 -- # create_nv_cache_bdev nvc0 0000:00:10.0 5c02a353-360e-47d9-8d0f-2454fb3db49e 00:28:35.395 12:30:44 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:28:35.395 12:30:44 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:28:35.395 12:30:44 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=5c02a353-360e-47d9-8d0f-2454fb3db49e 00:28:35.395 12:30:44 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:28:35.395 12:30:44 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 5c02a353-360e-47d9-8d0f-2454fb3db49e 00:28:35.395 12:30:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=5c02a353-360e-47d9-8d0f-2454fb3db49e 00:28:35.395 12:30:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:35.395 12:30:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:28:35.395 12:30:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:28:35.395 12:30:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5c02a353-360e-47d9-8d0f-2454fb3db49e 00:28:35.654 12:30:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:35.654 { 00:28:35.654 "name": "5c02a353-360e-47d9-8d0f-2454fb3db49e", 00:28:35.654 "aliases": [ 00:28:35.654 "lvs/nvme0n1p0" 00:28:35.654 ], 00:28:35.654 "product_name": "Logical Volume", 00:28:35.654 "block_size": 4096, 00:28:35.654 "num_blocks": 26476544, 00:28:35.654 "uuid": "5c02a353-360e-47d9-8d0f-2454fb3db49e", 00:28:35.654 "assigned_rate_limits": { 00:28:35.654 "rw_ios_per_sec": 0, 00:28:35.654 "rw_mbytes_per_sec": 0, 00:28:35.654 "r_mbytes_per_sec": 0, 00:28:35.654 "w_mbytes_per_sec": 0 00:28:35.654 }, 00:28:35.654 "claimed": false, 00:28:35.654 "zoned": false, 00:28:35.654 "supported_io_types": { 00:28:35.654 "read": true, 00:28:35.654 "write": true, 00:28:35.654 "unmap": true, 00:28:35.654 "flush": false, 00:28:35.654 "reset": true, 00:28:35.654 "nvme_admin": false, 00:28:35.654 "nvme_io": false, 00:28:35.654 "nvme_io_md": false, 00:28:35.654 "write_zeroes": true, 00:28:35.654 "zcopy": false, 00:28:35.654 "get_zone_info": false, 00:28:35.654 "zone_management": false, 00:28:35.654 "zone_append": false, 00:28:35.654 "compare": false, 00:28:35.654 "compare_and_write": false, 00:28:35.654 "abort": false, 00:28:35.654 "seek_hole": true, 
00:28:35.654 "seek_data": true, 00:28:35.654 "copy": false, 00:28:35.654 "nvme_iov_md": false 00:28:35.654 }, 00:28:35.654 "driver_specific": { 00:28:35.654 "lvol": { 00:28:35.654 "lvol_store_uuid": "df0af10f-79f8-4521-840e-1acdd4879a50", 00:28:35.654 "base_bdev": "nvme0n1", 00:28:35.654 "thin_provision": true, 00:28:35.654 "num_allocated_clusters": 0, 00:28:35.654 "snapshot": false, 00:28:35.654 "clone": false, 00:28:35.654 "esnap_clone": false 00:28:35.654 } 00:28:35.654 } 00:28:35.654 } 00:28:35.654 ]' 00:28:35.654 12:30:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:35.654 12:30:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:28:35.654 12:30:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:35.654 12:30:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:28:35.654 12:30:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:28:35.654 12:30:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:28:35.654 12:30:45 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:28:35.654 12:30:45 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:28:35.913 12:30:45 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:28:36.173 12:30:45 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:28:36.173 12:30:45 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:28:36.173 12:30:45 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 5c02a353-360e-47d9-8d0f-2454fb3db49e 00:28:36.173 12:30:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=5c02a353-360e-47d9-8d0f-2454fb3db49e 00:28:36.173 12:30:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:36.173 12:30:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:28:36.173 12:30:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:28:36.173 12:30:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5c02a353-360e-47d9-8d0f-2454fb3db49e 00:28:36.173 12:30:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:36.173 { 00:28:36.173 "name": "5c02a353-360e-47d9-8d0f-2454fb3db49e", 00:28:36.173 "aliases": [ 00:28:36.173 "lvs/nvme0n1p0" 00:28:36.173 ], 00:28:36.173 "product_name": "Logical Volume", 00:28:36.173 "block_size": 4096, 00:28:36.173 "num_blocks": 26476544, 00:28:36.173 "uuid": "5c02a353-360e-47d9-8d0f-2454fb3db49e", 00:28:36.173 "assigned_rate_limits": { 00:28:36.173 "rw_ios_per_sec": 0, 00:28:36.173 "rw_mbytes_per_sec": 0, 00:28:36.173 "r_mbytes_per_sec": 0, 00:28:36.173 "w_mbytes_per_sec": 0 00:28:36.173 }, 00:28:36.173 "claimed": false, 00:28:36.173 "zoned": false, 00:28:36.173 "supported_io_types": { 00:28:36.173 "read": true, 00:28:36.173 "write": true, 00:28:36.173 "unmap": true, 00:28:36.173 "flush": false, 00:28:36.173 "reset": true, 00:28:36.173 "nvme_admin": false, 00:28:36.173 "nvme_io": false, 00:28:36.173 "nvme_io_md": false, 00:28:36.173 "write_zeroes": true, 00:28:36.173 "zcopy": false, 00:28:36.173 "get_zone_info": false, 00:28:36.173 "zone_management": false, 00:28:36.173 "zone_append": false, 00:28:36.173 "compare": false, 00:28:36.173 "compare_and_write": false, 00:28:36.173 "abort": false, 00:28:36.173 "seek_hole": true, 00:28:36.173 "seek_data": true, 00:28:36.173 
"copy": false, 00:28:36.173 "nvme_iov_md": false 00:28:36.173 }, 00:28:36.173 "driver_specific": { 00:28:36.173 "lvol": { 00:28:36.173 "lvol_store_uuid": "df0af10f-79f8-4521-840e-1acdd4879a50", 00:28:36.173 "base_bdev": "nvme0n1", 00:28:36.173 "thin_provision": true, 00:28:36.173 "num_allocated_clusters": 0, 00:28:36.173 "snapshot": false, 00:28:36.173 "clone": false, 00:28:36.173 "esnap_clone": false 00:28:36.173 } 00:28:36.173 } 00:28:36.173 } 00:28:36.173 ]' 00:28:36.173 12:30:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:36.173 12:30:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:28:36.173 12:30:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:36.433 12:30:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:28:36.433 12:30:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:28:36.433 12:30:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:28:36.433 12:30:45 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:28:36.433 12:30:45 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:28:36.433 12:30:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@24 -- # nv_cache=nvc0n1p0 00:28:36.433 12:30:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # get_bdev_size 5c02a353-360e-47d9-8d0f-2454fb3db49e 00:28:36.433 12:30:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=5c02a353-360e-47d9-8d0f-2454fb3db49e 00:28:36.433 12:30:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:36.433 12:30:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:28:36.433 12:30:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:28:36.433 12:30:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5c02a353-360e-47d9-8d0f-2454fb3db49e 00:28:36.692 12:30:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:36.692 { 00:28:36.692 "name": "5c02a353-360e-47d9-8d0f-2454fb3db49e", 00:28:36.692 "aliases": [ 00:28:36.692 "lvs/nvme0n1p0" 00:28:36.692 ], 00:28:36.692 "product_name": "Logical Volume", 00:28:36.692 "block_size": 4096, 00:28:36.692 "num_blocks": 26476544, 00:28:36.692 "uuid": "5c02a353-360e-47d9-8d0f-2454fb3db49e", 00:28:36.692 "assigned_rate_limits": { 00:28:36.692 "rw_ios_per_sec": 0, 00:28:36.692 "rw_mbytes_per_sec": 0, 00:28:36.692 "r_mbytes_per_sec": 0, 00:28:36.692 "w_mbytes_per_sec": 0 00:28:36.692 }, 00:28:36.692 "claimed": false, 00:28:36.692 "zoned": false, 00:28:36.692 "supported_io_types": { 00:28:36.692 "read": true, 00:28:36.692 "write": true, 00:28:36.692 "unmap": true, 00:28:36.692 "flush": false, 00:28:36.692 "reset": true, 00:28:36.692 "nvme_admin": false, 00:28:36.692 "nvme_io": false, 00:28:36.692 "nvme_io_md": false, 00:28:36.692 "write_zeroes": true, 00:28:36.692 "zcopy": false, 00:28:36.692 "get_zone_info": false, 00:28:36.692 "zone_management": false, 00:28:36.692 "zone_append": false, 00:28:36.692 "compare": false, 00:28:36.692 "compare_and_write": false, 00:28:36.692 "abort": false, 00:28:36.692 "seek_hole": true, 00:28:36.692 "seek_data": true, 00:28:36.692 "copy": false, 00:28:36.692 "nvme_iov_md": false 00:28:36.692 }, 00:28:36.692 "driver_specific": { 00:28:36.692 "lvol": { 00:28:36.692 "lvol_store_uuid": "df0af10f-79f8-4521-840e-1acdd4879a50", 00:28:36.692 "base_bdev": 
"nvme0n1", 00:28:36.692 "thin_provision": true, 00:28:36.692 "num_allocated_clusters": 0, 00:28:36.692 "snapshot": false, 00:28:36.692 "clone": false, 00:28:36.692 "esnap_clone": false 00:28:36.692 } 00:28:36.692 } 00:28:36.692 } 00:28:36.692 ]' 00:28:36.692 12:30:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:36.692 12:30:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:28:36.692 12:30:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:36.692 12:30:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:28:36.692 12:30:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:28:36.692 12:30:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:28:36.692 12:30:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # l2p_dram_size_mb=20 00:28:36.692 12:30:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 5c02a353-360e-47d9-8d0f-2454fb3db49e -c nvc0n1p0 --l2p_dram_limit 20 00:28:36.953 [2024-07-10 12:30:46.325708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.953 [2024-07-10 12:30:46.325784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:36.953 [2024-07-10 12:30:46.325806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:36.953 [2024-07-10 12:30:46.325817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.953 [2024-07-10 12:30:46.325891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.953 [2024-07-10 12:30:46.325904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:36.953 [2024-07-10 12:30:46.325917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:28:36.953 [2024-07-10 12:30:46.325931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.953 [2024-07-10 12:30:46.325954] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:36.953 [2024-07-10 12:30:46.327208] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:36.953 [2024-07-10 12:30:46.327243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.953 [2024-07-10 12:30:46.327258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:36.953 [2024-07-10 12:30:46.327272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.292 ms 00:28:36.953 [2024-07-10 12:30:46.327283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.953 [2024-07-10 12:30:46.327364] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 83695a1b-e466-473e-962f-bc64eb2042f1 00:28:36.953 [2024-07-10 12:30:46.328891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.953 [2024-07-10 12:30:46.328932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:28:36.953 [2024-07-10 12:30:46.328945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:28:36.953 [2024-07-10 12:30:46.328962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.953 [2024-07-10 12:30:46.336617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.953 [2024-07-10 12:30:46.336657] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:36.953 [2024-07-10 12:30:46.336671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.623 ms 00:28:36.953 [2024-07-10 12:30:46.336684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.953 [2024-07-10 12:30:46.336818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.953 [2024-07-10 12:30:46.336848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:36.953 [2024-07-10 12:30:46.336874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:28:36.953 [2024-07-10 12:30:46.336889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.953 [2024-07-10 12:30:46.336974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.953 [2024-07-10 12:30:46.337005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:36.953 [2024-07-10 12:30:46.337023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:28:36.953 [2024-07-10 12:30:46.337036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.953 [2024-07-10 12:30:46.337063] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:36.953 [2024-07-10 12:30:46.342877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.953 [2024-07-10 12:30:46.342912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:36.953 [2024-07-10 12:30:46.342928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.829 ms 00:28:36.953 [2024-07-10 12:30:46.342940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.953 [2024-07-10 12:30:46.342982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.953 [2024-07-10 12:30:46.342998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:36.953 [2024-07-10 12:30:46.343012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:28:36.953 [2024-07-10 12:30:46.343022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.953 [2024-07-10 12:30:46.343063] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:28:36.953 [2024-07-10 12:30:46.343217] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:36.953 [2024-07-10 12:30:46.343249] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:36.953 [2024-07-10 12:30:46.343264] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:28:36.953 [2024-07-10 12:30:46.343287] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:36.953 [2024-07-10 12:30:46.343308] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:36.953 [2024-07-10 12:30:46.343331] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:36.953 [2024-07-10 12:30:46.343346] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:36.953 [2024-07-10 12:30:46.343361] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:36.953 [2024-07-10 12:30:46.343370] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache chunk count 5 00:28:36.953 [2024-07-10 12:30:46.343385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.953 [2024-07-10 12:30:46.343396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:36.953 [2024-07-10 12:30:46.343409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.324 ms 00:28:36.953 [2024-07-10 12:30:46.343425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.953 [2024-07-10 12:30:46.343500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.953 [2024-07-10 12:30:46.343514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:36.953 [2024-07-10 12:30:46.343532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:28:36.953 [2024-07-10 12:30:46.343548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.953 [2024-07-10 12:30:46.343654] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:36.953 [2024-07-10 12:30:46.343673] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:36.953 [2024-07-10 12:30:46.343687] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:36.953 [2024-07-10 12:30:46.343697] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:36.953 [2024-07-10 12:30:46.343713] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:36.953 [2024-07-10 12:30:46.343722] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:36.953 [2024-07-10 12:30:46.343747] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:36.953 [2024-07-10 12:30:46.343757] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:36.953 [2024-07-10 12:30:46.343769] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:36.953 [2024-07-10 12:30:46.343780] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:36.953 [2024-07-10 12:30:46.343792] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:36.953 [2024-07-10 12:30:46.343801] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:36.953 [2024-07-10 12:30:46.343813] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:36.953 [2024-07-10 12:30:46.343822] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:36.953 [2024-07-10 12:30:46.343843] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:36.953 [2024-07-10 12:30:46.343859] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:36.953 [2024-07-10 12:30:46.343883] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:36.953 [2024-07-10 12:30:46.343900] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:36.953 [2024-07-10 12:30:46.343928] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:36.953 [2024-07-10 12:30:46.343938] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:36.953 [2024-07-10 12:30:46.343950] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:36.953 [2024-07-10 12:30:46.343959] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:36.953 [2024-07-10 12:30:46.343971] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:36.953 [2024-07-10 12:30:46.343980] ftl_layout.c: 
119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:36.953 [2024-07-10 12:30:46.343991] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:36.953 [2024-07-10 12:30:46.344001] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:36.953 [2024-07-10 12:30:46.344020] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:36.953 [2024-07-10 12:30:46.344035] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:36.953 [2024-07-10 12:30:46.344054] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:36.953 [2024-07-10 12:30:46.344074] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:36.954 [2024-07-10 12:30:46.344087] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:36.954 [2024-07-10 12:30:46.344102] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:36.954 [2024-07-10 12:30:46.344126] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:36.954 [2024-07-10 12:30:46.344140] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:36.954 [2024-07-10 12:30:46.344153] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:36.954 [2024-07-10 12:30:46.344162] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:36.954 [2024-07-10 12:30:46.344174] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:36.954 [2024-07-10 12:30:46.344185] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:36.954 [2024-07-10 12:30:46.344206] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:36.954 [2024-07-10 12:30:46.344223] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:36.954 [2024-07-10 12:30:46.344236] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:36.954 [2024-07-10 12:30:46.344247] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:36.954 [2024-07-10 12:30:46.344259] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:36.954 [2024-07-10 12:30:46.344268] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:36.954 [2024-07-10 12:30:46.344282] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:36.954 [2024-07-10 12:30:46.344292] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:36.954 [2024-07-10 12:30:46.344305] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:36.954 [2024-07-10 12:30:46.344316] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:36.954 [2024-07-10 12:30:46.344332] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:36.954 [2024-07-10 12:30:46.344347] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:36.954 [2024-07-10 12:30:46.344366] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:36.954 [2024-07-10 12:30:46.344383] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:36.954 [2024-07-10 12:30:46.344405] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:36.954 [2024-07-10 12:30:46.344426] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:36.954 [2024-07-10 12:30:46.344443] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:36.954 [2024-07-10 12:30:46.344455] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:36.954 [2024-07-10 12:30:46.344469] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:36.954 [2024-07-10 12:30:46.344480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:36.954 [2024-07-10 12:30:46.344493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:36.954 [2024-07-10 12:30:46.344503] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:36.954 [2024-07-10 12:30:46.344516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:36.954 [2024-07-10 12:30:46.344527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:36.954 [2024-07-10 12:30:46.344539] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:36.954 [2024-07-10 12:30:46.344549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:36.954 [2024-07-10 12:30:46.344566] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:36.954 [2024-07-10 12:30:46.344580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:36.954 [2024-07-10 12:30:46.344602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:36.954 [2024-07-10 12:30:46.344614] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:36.954 [2024-07-10 12:30:46.344628] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:36.954 [2024-07-10 12:30:46.344638] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:36.954 [2024-07-10 12:30:46.344652] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:36.954 [2024-07-10 12:30:46.344664] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:36.954 [2024-07-10 12:30:46.344677] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:36.954 [2024-07-10 12:30:46.344688] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:36.954 [2024-07-10 12:30:46.344700] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:36.954 [2024-07-10 12:30:46.344714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.954 [2024-07-10 12:30:46.344738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:36.954 [2024-07-10 12:30:46.344753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.125 ms 00:28:36.954 [2024-07-10 12:30:46.344766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.954 [2024-07-10 12:30:46.344812] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:28:36.954 [2024-07-10 12:30:46.344833] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:28:43.548 [2024-07-10 12:30:52.175356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.548 [2024-07-10 12:30:52.175429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:28:43.548 [2024-07-10 12:30:52.175447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5840.007 ms 00:28:43.548 [2024-07-10 12:30:52.175464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.548 [2024-07-10 12:30:52.223963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.548 [2024-07-10 12:30:52.224020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:43.548 [2024-07-10 12:30:52.224044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.239 ms 00:28:43.548 [2024-07-10 12:30:52.224067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.548 [2024-07-10 12:30:52.224237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.548 [2024-07-10 12:30:52.224257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:43.548 [2024-07-10 12:30:52.224271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:28:43.548 [2024-07-10 12:30:52.224290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.548 [2024-07-10 12:30:52.276823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.548 [2024-07-10 12:30:52.276877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:43.548 [2024-07-10 12:30:52.276894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.571 ms 00:28:43.548 [2024-07-10 12:30:52.276908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.548 [2024-07-10 12:30:52.276954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.548 [2024-07-10 12:30:52.276981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:43.548 [2024-07-10 12:30:52.276998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:43.548 [2024-07-10 12:30:52.277020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.548 [2024-07-10 12:30:52.277530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.548 [2024-07-10 12:30:52.277559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:43.548 [2024-07-10 12:30:52.277572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.439 ms 00:28:43.548 [2024-07-10 12:30:52.277585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.548 [2024-07-10 12:30:52.277697] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.549 [2024-07-10 12:30:52.277717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:43.549 [2024-07-10 12:30:52.277739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:28:43.549 [2024-07-10 12:30:52.277755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.549 [2024-07-10 12:30:52.298075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.549 [2024-07-10 12:30:52.298122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:43.549 [2024-07-10 12:30:52.298139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.319 ms 00:28:43.549 [2024-07-10 12:30:52.298152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.549 [2024-07-10 12:30:52.312363] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:28:43.549 [2024-07-10 12:30:52.318362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.549 [2024-07-10 12:30:52.318401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:43.549 [2024-07-10 12:30:52.318421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.116 ms 00:28:43.549 [2024-07-10 12:30:52.318432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.549 [2024-07-10 12:30:52.405160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.549 [2024-07-10 12:30:52.405252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:28:43.549 [2024-07-10 12:30:52.405274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.820 ms 00:28:43.549 [2024-07-10 12:30:52.405285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.549 [2024-07-10 12:30:52.405487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.549 [2024-07-10 12:30:52.405500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:43.549 [2024-07-10 12:30:52.405517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.152 ms 00:28:43.549 [2024-07-10 12:30:52.405528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.549 [2024-07-10 12:30:52.443614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.549 [2024-07-10 12:30:52.443663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:28:43.549 [2024-07-10 12:30:52.443683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.089 ms 00:28:43.549 [2024-07-10 12:30:52.443694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.549 [2024-07-10 12:30:52.483219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.549 [2024-07-10 12:30:52.483269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:28:43.549 [2024-07-10 12:30:52.483290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.534 ms 00:28:43.549 [2024-07-10 12:30:52.483300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.549 [2024-07-10 12:30:52.484083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.549 [2024-07-10 12:30:52.484118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:43.549 [2024-07-10 12:30:52.484134] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.740 ms 00:28:43.549 [2024-07-10 12:30:52.484144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.549 [2024-07-10 12:30:52.600114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.549 [2024-07-10 12:30:52.600186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:28:43.549 [2024-07-10 12:30:52.600211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 116.097 ms 00:28:43.549 [2024-07-10 12:30:52.600222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.549 [2024-07-10 12:30:52.640110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.549 [2024-07-10 12:30:52.640161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:28:43.549 [2024-07-10 12:30:52.640181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.905 ms 00:28:43.549 [2024-07-10 12:30:52.640207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.549 [2024-07-10 12:30:52.679299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.549 [2024-07-10 12:30:52.679363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:28:43.549 [2024-07-10 12:30:52.679398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.105 ms 00:28:43.549 [2024-07-10 12:30:52.679408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.549 [2024-07-10 12:30:52.717772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.549 [2024-07-10 12:30:52.717815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:43.549 [2024-07-10 12:30:52.717834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.379 ms 00:28:43.549 [2024-07-10 12:30:52.717845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.549 [2024-07-10 12:30:52.717898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.549 [2024-07-10 12:30:52.717910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:43.549 [2024-07-10 12:30:52.717927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:28:43.549 [2024-07-10 12:30:52.717938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.549 [2024-07-10 12:30:52.718037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.549 [2024-07-10 12:30:52.718052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:43.549 [2024-07-10 12:30:52.718072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:28:43.549 [2024-07-10 12:30:52.718087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.549 [2024-07-10 12:30:52.719126] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 6403.328 ms, result 0 00:28:43.549 { 00:28:43.549 "name": "ftl0", 00:28:43.549 "uuid": "83695a1b-e466-473e-962f-bc64eb2042f1" 00:28:43.549 } 00:28:43.549 12:30:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:28:43.549 12:30:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # jq -r .name 00:28:43.549 12:30:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # grep -qw ftl0 00:28:43.549 12:30:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:28:43.807 [2024-07-10 12:30:53.071292] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:28:43.807 I/O size of 69632 is greater than zero copy threshold (65536). 00:28:43.807 Zero copy mechanism will not be used. 00:28:43.807 Running I/O for 4 seconds... 00:28:47.993 00:28:47.993 Latency(us) 00:28:47.993 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:47.993 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:28:47.993 ftl0 : 4.00 1652.20 109.72 0.00 0.00 633.39 218.78 1237.02 00:28:47.993 =================================================================================================================== 00:28:47.993 Total : 1652.20 109.72 0.00 0.00 633.39 218.78 1237.02 00:28:47.993 0 00:28:47.993 [2024-07-10 12:30:57.074844] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:28:47.993 12:30:57 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:28:47.993 [2024-07-10 12:30:57.177480] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:28:47.993 Running I/O for 4 seconds... 00:28:52.176 00:28:52.176 Latency(us) 00:28:52.176 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:52.176 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:28:52.176 ftl0 : 4.01 10072.86 39.35 0.00 0.00 12683.03 236.88 68220.61 00:28:52.176 =================================================================================================================== 00:28:52.176 Total : 10072.86 39.35 0.00 0.00 12683.03 0.00 68220.61 00:28:52.176 [2024-07-10 12:31:01.194636] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:28:52.176 0 00:28:52.176 12:31:01 ftl.ftl_bdevperf -- ftl/bdevperf.sh@33 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:28:52.176 [2024-07-10 12:31:01.323508] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:28:52.177 Running I/O for 4 seconds... 
00:28:56.372 00:28:56.372 Latency(us) 00:28:56.372 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:56.372 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:56.372 Verification LBA range: start 0x0 length 0x1400000 00:28:56.372 ftl0 : 4.01 8209.52 32.07 0.00 0.00 15545.08 273.07 18739.61 00:28:56.372 =================================================================================================================== 00:28:56.372 Total : 8209.52 32.07 0.00 0.00 15545.08 0.00 18739.61 00:28:56.372 [2024-07-10 12:31:05.346455] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:28:56.372 0 00:28:56.372 12:31:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:28:56.372 [2024-07-10 12:31:05.542775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.372 [2024-07-10 12:31:05.542836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:56.372 [2024-07-10 12:31:05.542857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:28:56.372 [2024-07-10 12:31:05.542867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.372 [2024-07-10 12:31:05.542900] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:56.372 [2024-07-10 12:31:05.546823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.372 [2024-07-10 12:31:05.546860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:56.372 [2024-07-10 12:31:05.546875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.909 ms 00:28:56.372 [2024-07-10 12:31:05.546891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.372 [2024-07-10 12:31:05.548801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.372 [2024-07-10 12:31:05.548847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:56.372 [2024-07-10 12:31:05.548861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.887 ms 00:28:56.372 [2024-07-10 12:31:05.548874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.372 [2024-07-10 12:31:05.756985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.372 [2024-07-10 12:31:05.757076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:56.372 [2024-07-10 12:31:05.757100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 208.424 ms 00:28:56.372 [2024-07-10 12:31:05.757119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.372 [2024-07-10 12:31:05.762135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.372 [2024-07-10 12:31:05.762173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:56.372 [2024-07-10 12:31:05.762185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.972 ms 00:28:56.372 [2024-07-10 12:31:05.762198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.372 [2024-07-10 12:31:05.802575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.372 [2024-07-10 12:31:05.802622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:56.372 [2024-07-10 12:31:05.802637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 40.377 ms 00:28:56.372 [2024-07-10 12:31:05.802649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.372 [2024-07-10 12:31:05.826027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.372 [2024-07-10 12:31:05.826074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:56.372 [2024-07-10 12:31:05.826090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.377 ms 00:28:56.372 [2024-07-10 12:31:05.826106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.372 [2024-07-10 12:31:05.826250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.372 [2024-07-10 12:31:05.826266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:56.372 [2024-07-10 12:31:05.826277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:28:56.372 [2024-07-10 12:31:05.826293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.632 [2024-07-10 12:31:05.865850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.632 [2024-07-10 12:31:05.865901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:28:56.632 [2024-07-10 12:31:05.865918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.603 ms 00:28:56.632 [2024-07-10 12:31:05.865930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.632 [2024-07-10 12:31:05.904631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.632 [2024-07-10 12:31:05.904675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:28:56.632 [2024-07-10 12:31:05.904689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.722 ms 00:28:56.632 [2024-07-10 12:31:05.904703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.632 [2024-07-10 12:31:05.941928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.632 [2024-07-10 12:31:05.941965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:56.632 [2024-07-10 12:31:05.941979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.233 ms 00:28:56.632 [2024-07-10 12:31:05.941991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.632 [2024-07-10 12:31:05.980749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.632 [2024-07-10 12:31:05.980790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:56.632 [2024-07-10 12:31:05.980804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.733 ms 00:28:56.632 [2024-07-10 12:31:05.980820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.632 [2024-07-10 12:31:05.980859] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:56.632 [2024-07-10 12:31:05.980880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:56.632 [2024-07-10 12:31:05.980893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:56.632 [2024-07-10 12:31:05.980907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:56.632 [2024-07-10 12:31:05.980919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:28:56.632 [2024-07-10 12:31:05.980933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:56.632 [2024-07-10 12:31:05.980944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:56.632 [2024-07-10 12:31:05.980958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:56.632 [2024-07-10 12:31:05.980969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:56.632 [2024-07-10 12:31:05.980983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:56.632 [2024-07-10 12:31:05.980994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:56.632 [2024-07-10 12:31:05.981007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:56.632 [2024-07-10 12:31:05.981018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:56.632 [2024-07-10 12:31:05.981031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:56.632 [2024-07-10 12:31:05.981041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:56.632 [2024-07-10 12:31:05.981058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:56.632 [2024-07-10 12:31:05.981068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:56.632 [2024-07-10 12:31:05.981082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:56.632 [2024-07-10 12:31:05.981093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:56.632 [2024-07-10 12:31:05.981108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:56.632 [2024-07-10 12:31:05.981119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:56.632 [2024-07-10 12:31:05.981133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:56.632 [2024-07-10 12:31:05.981144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:56.632 [2024-07-10 12:31:05.981157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:56.632 [2024-07-10 12:31:05.981167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:56.632 [2024-07-10 12:31:05.981181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:56.633 [2024-07-10 12:31:05.981193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:56.633 [2024-07-10 12:31:05.981208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:56.633 [2024-07-10 12:31:05.981219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:56.633 [2024-07-10 12:31:05.981232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:28:56.633 [2024-07-10 12:31:05.981243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:56.633 [2024-07-10 12:31:05.981258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:56.633 [2024-07-10 12:31:05.981270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:56.633 [2024-07-10 12:31:05.981283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:56.633 [2024-07-10 12:31:05.981295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:56.633 [2024-07-10 12:31:05.981308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:56.633 [2024-07-10 12:31:05.981318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:56.633 [2024-07-10 12:31:05.981331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:56.633 [2024-07-10 12:31:05.981342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:56.633 [2024-07-10 12:31:05.981356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:56.633 [2024-07-10 12:31:05.981368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:56.633 [2024-07-10 12:31:05.981382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:56.633 [2024-07-10 12:31:05.981393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:56.633 [2024-07-10 12:31:05.981406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:56.633 [2024-07-10 12:31:05.981417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:56.633 [2024-07-10 12:31:05.981430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:56.633 [2024-07-10 12:31:05.981440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:56.633 [2024-07-10 12:31:05.981459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:56.633 [2024-07-10 12:31:05.981469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:56.633 [2024-07-10 12:31:05.981483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:56.633 [2024-07-10 12:31:05.981506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:56.633 [2024-07-10 12:31:05.981522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:56.633 [2024-07-10 12:31:05.981533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:56.633 [2024-07-10 12:31:05.981547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:56.633 [2024-07-10 12:31:05.981558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:56.633 [2024-07-10 12:31:05.981571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:56.633 [2024-07-10 12:31:05.981583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:56.633 [2024-07-10 12:31:05.981596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:56.633 [2024-07-10 12:31:05.981608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:56.633 [2024-07-10 12:31:05.981622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:56.633 [2024-07-10 12:31:05.981633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:56.633 [2024-07-10 12:31:05.981647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:56.633 [2024-07-10 12:31:05.981658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:56.633 [2024-07-10 12:31:05.981673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:56.633 [2024-07-10 12:31:05.981686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:56.633 [2024-07-10 12:31:05.981700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:56.633 [2024-07-10 12:31:05.981711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:56.633 [2024-07-10 12:31:05.981726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:56.633 [2024-07-10 12:31:05.981751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:56.633 [2024-07-10 12:31:05.981764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:56.633 [2024-07-10 12:31:05.981775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:56.633 [2024-07-10 12:31:05.981791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:56.633 [2024-07-10 12:31:05.981802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:56.633 [2024-07-10 12:31:05.981815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:56.633 [2024-07-10 12:31:05.981826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:56.633 [2024-07-10 12:31:05.981840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:56.633 [2024-07-10 12:31:05.981850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:56.633 [2024-07-10 12:31:05.981864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:56.633 [2024-07-10 12:31:05.981874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:56.633 [2024-07-10 12:31:05.981890] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:56.633 [2024-07-10 12:31:05.981901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:56.633 [2024-07-10 12:31:05.981914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:56.633 [2024-07-10 12:31:05.981926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:56.633 [2024-07-10 12:31:05.981939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:56.633 [2024-07-10 12:31:05.981950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:56.633 [2024-07-10 12:31:05.981964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:56.633 [2024-07-10 12:31:05.981983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:56.633 [2024-07-10 12:31:05.981997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:56.633 [2024-07-10 12:31:05.982008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:56.633 [2024-07-10 12:31:05.982021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:56.633 [2024-07-10 12:31:05.982032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:56.634 [2024-07-10 12:31:05.982045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:56.634 [2024-07-10 12:31:05.982055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:56.634 [2024-07-10 12:31:05.982069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:56.634 [2024-07-10 12:31:05.982079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:56.634 [2024-07-10 12:31:05.982095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:56.634 [2024-07-10 12:31:05.982107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:56.634 [2024-07-10 12:31:05.982121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:56.634 [2024-07-10 12:31:05.982132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:56.634 [2024-07-10 12:31:05.982145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:56.634 [2024-07-10 12:31:05.982156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:56.634 [2024-07-10 12:31:05.982177] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:56.634 [2024-07-10 12:31:05.982188] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 83695a1b-e466-473e-962f-bc64eb2042f1 00:28:56.634 [2024-07-10 12:31:05.982202] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:56.634 [2024-07-10 12:31:05.982211] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:28:56.634 [2024-07-10 12:31:05.982223] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:56.634 [2024-07-10 12:31:05.982235] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:56.634 [2024-07-10 12:31:05.982250] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:56.634 [2024-07-10 12:31:05.982261] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:56.634 [2024-07-10 12:31:05.982274] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:56.634 [2024-07-10 12:31:05.982283] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:56.634 [2024-07-10 12:31:05.982297] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:56.634 [2024-07-10 12:31:05.982306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.634 [2024-07-10 12:31:05.982319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:56.634 [2024-07-10 12:31:05.982331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.452 ms 00:28:56.634 [2024-07-10 12:31:05.982343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.634 [2024-07-10 12:31:06.003356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.634 [2024-07-10 12:31:06.003396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:56.634 [2024-07-10 12:31:06.003413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.997 ms 00:28:56.634 [2024-07-10 12:31:06.003425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.634 [2024-07-10 12:31:06.003931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.634 [2024-07-10 12:31:06.003950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:56.634 [2024-07-10 12:31:06.003962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.484 ms 00:28:56.634 [2024-07-10 12:31:06.003976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.634 [2024-07-10 12:31:06.053883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:56.634 [2024-07-10 12:31:06.053936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:56.634 [2024-07-10 12:31:06.053951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:56.634 [2024-07-10 12:31:06.053983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.634 [2024-07-10 12:31:06.054045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:56.634 [2024-07-10 12:31:06.054059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:56.634 [2024-07-10 12:31:06.054069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:56.634 [2024-07-10 12:31:06.054082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.634 [2024-07-10 12:31:06.054170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:56.634 [2024-07-10 12:31:06.054187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:56.634 [2024-07-10 12:31:06.054199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:56.634 [2024-07-10 12:31:06.054215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.634 [2024-07-10 12:31:06.054233] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:56.634 [2024-07-10 12:31:06.054246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:56.634 [2024-07-10 12:31:06.054256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:56.634 [2024-07-10 12:31:06.054269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.894 [2024-07-10 12:31:06.173301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:56.894 [2024-07-10 12:31:06.173363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:56.894 [2024-07-10 12:31:06.173399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:56.894 [2024-07-10 12:31:06.173416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.894 [2024-07-10 12:31:06.277208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:56.894 [2024-07-10 12:31:06.277273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:56.894 [2024-07-10 12:31:06.277290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:56.894 [2024-07-10 12:31:06.277304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.894 [2024-07-10 12:31:06.277408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:56.894 [2024-07-10 12:31:06.277424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:56.894 [2024-07-10 12:31:06.277436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:56.894 [2024-07-10 12:31:06.277448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.894 [2024-07-10 12:31:06.277498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:56.894 [2024-07-10 12:31:06.277512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:56.894 [2024-07-10 12:31:06.277522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:56.894 [2024-07-10 12:31:06.277536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.894 [2024-07-10 12:31:06.277657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:56.894 [2024-07-10 12:31:06.277674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:56.894 [2024-07-10 12:31:06.277685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:56.894 [2024-07-10 12:31:06.277701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.894 [2024-07-10 12:31:06.277758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:56.894 [2024-07-10 12:31:06.277775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:56.894 [2024-07-10 12:31:06.277785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:56.894 [2024-07-10 12:31:06.277798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.894 [2024-07-10 12:31:06.277841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:56.894 [2024-07-10 12:31:06.277855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:56.894 [2024-07-10 12:31:06.277865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:56.894 [2024-07-10 12:31:06.277878] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:28:56.894 [2024-07-10 12:31:06.277928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:56.894 [2024-07-10 12:31:06.277942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:56.894 [2024-07-10 12:31:06.277953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:56.894 [2024-07-10 12:31:06.277966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.894 [2024-07-10 12:31:06.278100] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 736.513 ms, result 0 00:28:56.894 true 00:28:56.894 12:31:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # killprocess 80627 00:28:56.894 12:31:06 ftl.ftl_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 80627 ']' 00:28:56.894 12:31:06 ftl.ftl_bdevperf -- common/autotest_common.sh@952 -- # kill -0 80627 00:28:56.894 12:31:06 ftl.ftl_bdevperf -- common/autotest_common.sh@953 -- # uname 00:28:56.894 12:31:06 ftl.ftl_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:56.894 12:31:06 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80627 00:28:56.894 killing process with pid 80627 00:28:56.894 Received shutdown signal, test time was about 4.000000 seconds 00:28:56.894 00:28:56.894 Latency(us) 00:28:56.894 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:56.894 =================================================================================================================== 00:28:56.894 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:56.894 12:31:06 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:56.894 12:31:06 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:56.894 12:31:06 ftl.ftl_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80627' 00:28:56.894 12:31:06 ftl.ftl_bdevperf -- common/autotest_common.sh@967 -- # kill 80627 00:28:56.894 12:31:06 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # wait 80627 00:28:58.273 12:31:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@38 -- # trap - SIGINT SIGTERM EXIT 00:28:58.273 12:31:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # timing_exit '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:28:58.273 12:31:07 ftl.ftl_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:58.273 12:31:07 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:58.533 12:31:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@41 -- # remove_shm 00:28:58.533 Remove shared memory files 00:28:58.533 12:31:07 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:58.533 12:31:07 ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:28:58.533 12:31:07 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:28:58.533 12:31:07 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:28:58.533 12:31:07 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:58.533 12:31:07 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:28:58.533 ************************************ 00:28:58.533 END TEST ftl_bdevperf 00:28:58.533 ************************************ 00:28:58.533 00:28:58.533 real 0m25.357s 00:28:58.533 user 0m27.696s 00:28:58.533 sys 0m1.245s 00:28:58.533 12:31:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:58.533 12:31:07 ftl.ftl_bdevperf -- 
common/autotest_common.sh@10 -- # set +x 00:28:58.533 12:31:07 ftl -- common/autotest_common.sh@1142 -- # return 0 00:28:58.533 12:31:07 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:28:58.533 12:31:07 ftl -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:28:58.533 12:31:07 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:58.533 12:31:07 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:58.533 ************************************ 00:28:58.533 START TEST ftl_trim 00:28:58.533 ************************************ 00:28:58.533 12:31:07 ftl.ftl_trim -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:28:58.533 * Looking for test storage... 00:28:58.533 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:28:58.533 12:31:07 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:28:58.533 12:31:07 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:28:58.534 12:31:07 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:28:58.534 12:31:07 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:28:58.534 12:31:07 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:28:58.534 12:31:07 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:28:58.534 12:31:07 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:58.534 12:31:07 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:28:58.534 12:31:07 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:28:58.534 12:31:07 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:58.534 12:31:07 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:58.534 12:31:07 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:28:58.534 12:31:07 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:28:58.534 12:31:07 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:58.534 12:31:07 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:58.534 12:31:07 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:28:58.534 12:31:07 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:28:58.534 12:31:07 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:58.534 12:31:07 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:58.534 12:31:08 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:28:58.534 12:31:08 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:28:58.534 12:31:08 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:58.534 12:31:08 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:58.534 12:31:08 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:58.534 12:31:08 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:58.534 
12:31:08 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:28:58.534 12:31:08 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:28:58.534 12:31:08 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:58.534 12:31:08 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:58.534 12:31:08 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:58.534 12:31:08 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:28:58.534 12:31:08 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:28:58.534 12:31:08 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:28:58.534 12:31:08 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:28:58.534 12:31:08 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:28:58.534 12:31:08 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:28:58.534 12:31:08 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:28:58.534 12:31:08 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:28:58.534 12:31:08 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:58.534 12:31:08 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:58.534 12:31:08 ftl.ftl_trim -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:28:58.534 12:31:08 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=81008 00:28:58.534 12:31:08 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:28:58.534 12:31:08 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 81008 00:28:58.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:58.793 12:31:08 ftl.ftl_trim -- common/autotest_common.sh@829 -- # '[' -z 81008 ']' 00:28:58.793 12:31:08 ftl.ftl_trim -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:58.793 12:31:08 ftl.ftl_trim -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:58.793 12:31:08 ftl.ftl_trim -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:58.793 12:31:08 ftl.ftl_trim -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:58.793 12:31:08 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:28:58.793 [2024-07-10 12:31:08.119139] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:28:58.793 [2024-07-10 12:31:08.119283] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81008 ] 00:28:59.053 [2024-07-10 12:31:08.293929] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:59.312 [2024-07-10 12:31:08.540924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:59.312 [2024-07-10 12:31:08.540976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:59.312 [2024-07-10 12:31:08.541014] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:00.249 12:31:09 ftl.ftl_trim -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:00.249 12:31:09 ftl.ftl_trim -- common/autotest_common.sh@862 -- # return 0 00:29:00.249 12:31:09 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:29:00.249 12:31:09 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:29:00.249 12:31:09 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:29:00.249 12:31:09 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:29:00.249 12:31:09 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:29:00.249 12:31:09 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:29:00.506 12:31:09 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:29:00.506 12:31:09 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:29:00.506 12:31:09 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:29:00.506 12:31:09 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:29:00.506 12:31:09 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:29:00.506 12:31:09 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:29:00.506 12:31:09 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:29:00.506 12:31:09 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:29:00.506 12:31:09 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:29:00.506 { 00:29:00.506 "name": "nvme0n1", 00:29:00.506 "aliases": [ 00:29:00.506 "f4c33660-48c1-4e18-bb1f-709e2a9758f5" 00:29:00.506 ], 00:29:00.506 "product_name": "NVMe disk", 00:29:00.506 "block_size": 4096, 00:29:00.506 "num_blocks": 1310720, 00:29:00.506 "uuid": "f4c33660-48c1-4e18-bb1f-709e2a9758f5", 00:29:00.506 "assigned_rate_limits": { 00:29:00.506 "rw_ios_per_sec": 0, 00:29:00.506 "rw_mbytes_per_sec": 0, 00:29:00.506 "r_mbytes_per_sec": 0, 00:29:00.506 "w_mbytes_per_sec": 0 00:29:00.506 }, 00:29:00.506 "claimed": true, 00:29:00.506 "claim_type": "read_many_write_one", 00:29:00.506 "zoned": false, 00:29:00.506 "supported_io_types": { 00:29:00.506 "read": true, 00:29:00.506 "write": true, 00:29:00.506 "unmap": true, 00:29:00.506 "flush": true, 00:29:00.506 "reset": true, 00:29:00.506 "nvme_admin": true, 00:29:00.506 "nvme_io": true, 00:29:00.506 "nvme_io_md": false, 00:29:00.506 "write_zeroes": true, 00:29:00.506 "zcopy": false, 00:29:00.506 "get_zone_info": false, 00:29:00.506 "zone_management": false, 00:29:00.506 "zone_append": false, 00:29:00.506 "compare": true, 00:29:00.506 "compare_and_write": false, 00:29:00.506 "abort": true, 00:29:00.506 "seek_hole": false, 00:29:00.506 "seek_data": false, 00:29:00.506 
"copy": true, 00:29:00.506 "nvme_iov_md": false 00:29:00.506 }, 00:29:00.506 "driver_specific": { 00:29:00.506 "nvme": [ 00:29:00.506 { 00:29:00.506 "pci_address": "0000:00:11.0", 00:29:00.506 "trid": { 00:29:00.506 "trtype": "PCIe", 00:29:00.506 "traddr": "0000:00:11.0" 00:29:00.506 }, 00:29:00.506 "ctrlr_data": { 00:29:00.506 "cntlid": 0, 00:29:00.506 "vendor_id": "0x1b36", 00:29:00.506 "model_number": "QEMU NVMe Ctrl", 00:29:00.506 "serial_number": "12341", 00:29:00.506 "firmware_revision": "8.0.0", 00:29:00.506 "subnqn": "nqn.2019-08.org.qemu:12341", 00:29:00.506 "oacs": { 00:29:00.506 "security": 0, 00:29:00.506 "format": 1, 00:29:00.506 "firmware": 0, 00:29:00.506 "ns_manage": 1 00:29:00.506 }, 00:29:00.506 "multi_ctrlr": false, 00:29:00.506 "ana_reporting": false 00:29:00.506 }, 00:29:00.506 "vs": { 00:29:00.507 "nvme_version": "1.4" 00:29:00.507 }, 00:29:00.507 "ns_data": { 00:29:00.507 "id": 1, 00:29:00.507 "can_share": false 00:29:00.507 } 00:29:00.507 } 00:29:00.507 ], 00:29:00.507 "mp_policy": "active_passive" 00:29:00.507 } 00:29:00.507 } 00:29:00.507 ]' 00:29:00.507 12:31:09 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:29:00.766 12:31:10 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:29:00.766 12:31:10 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:29:00.766 12:31:10 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=1310720 00:29:00.766 12:31:10 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:29:00.766 12:31:10 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 5120 00:29:00.766 12:31:10 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:29:00.766 12:31:10 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:29:00.766 12:31:10 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:29:00.766 12:31:10 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:00.766 12:31:10 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:29:00.766 12:31:10 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=df0af10f-79f8-4521-840e-1acdd4879a50 00:29:00.766 12:31:10 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:29:00.766 12:31:10 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u df0af10f-79f8-4521-840e-1acdd4879a50 00:29:01.025 12:31:10 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:29:01.284 12:31:10 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=65f5c800-8d08-4229-b5ae-86f119fe0c55 00:29:01.284 12:31:10 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 65f5c800-8d08-4229-b5ae-86f119fe0c55 00:29:01.543 12:31:10 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=3739624d-349c-4dfb-bec5-9f66ec5bd517 00:29:01.543 12:31:10 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 3739624d-349c-4dfb-bec5-9f66ec5bd517 00:29:01.543 12:31:10 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:29:01.543 12:31:10 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:29:01.543 12:31:10 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=3739624d-349c-4dfb-bec5-9f66ec5bd517 00:29:01.543 12:31:10 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:29:01.543 12:31:10 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 3739624d-349c-4dfb-bec5-9f66ec5bd517 00:29:01.543 12:31:10 
ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=3739624d-349c-4dfb-bec5-9f66ec5bd517 00:29:01.543 12:31:10 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:29:01.543 12:31:10 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:29:01.543 12:31:10 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:29:01.543 12:31:10 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3739624d-349c-4dfb-bec5-9f66ec5bd517 00:29:01.543 12:31:10 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:29:01.543 { 00:29:01.543 "name": "3739624d-349c-4dfb-bec5-9f66ec5bd517", 00:29:01.543 "aliases": [ 00:29:01.543 "lvs/nvme0n1p0" 00:29:01.543 ], 00:29:01.543 "product_name": "Logical Volume", 00:29:01.543 "block_size": 4096, 00:29:01.543 "num_blocks": 26476544, 00:29:01.543 "uuid": "3739624d-349c-4dfb-bec5-9f66ec5bd517", 00:29:01.543 "assigned_rate_limits": { 00:29:01.543 "rw_ios_per_sec": 0, 00:29:01.543 "rw_mbytes_per_sec": 0, 00:29:01.543 "r_mbytes_per_sec": 0, 00:29:01.543 "w_mbytes_per_sec": 0 00:29:01.543 }, 00:29:01.543 "claimed": false, 00:29:01.543 "zoned": false, 00:29:01.543 "supported_io_types": { 00:29:01.543 "read": true, 00:29:01.543 "write": true, 00:29:01.543 "unmap": true, 00:29:01.543 "flush": false, 00:29:01.543 "reset": true, 00:29:01.543 "nvme_admin": false, 00:29:01.543 "nvme_io": false, 00:29:01.543 "nvme_io_md": false, 00:29:01.543 "write_zeroes": true, 00:29:01.543 "zcopy": false, 00:29:01.543 "get_zone_info": false, 00:29:01.543 "zone_management": false, 00:29:01.543 "zone_append": false, 00:29:01.543 "compare": false, 00:29:01.543 "compare_and_write": false, 00:29:01.543 "abort": false, 00:29:01.543 "seek_hole": true, 00:29:01.543 "seek_data": true, 00:29:01.543 "copy": false, 00:29:01.543 "nvme_iov_md": false 00:29:01.543 }, 00:29:01.543 "driver_specific": { 00:29:01.543 "lvol": { 00:29:01.543 "lvol_store_uuid": "65f5c800-8d08-4229-b5ae-86f119fe0c55", 00:29:01.543 "base_bdev": "nvme0n1", 00:29:01.543 "thin_provision": true, 00:29:01.543 "num_allocated_clusters": 0, 00:29:01.543 "snapshot": false, 00:29:01.543 "clone": false, 00:29:01.543 "esnap_clone": false 00:29:01.543 } 00:29:01.543 } 00:29:01.543 } 00:29:01.543 ]' 00:29:01.543 12:31:10 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:29:01.802 12:31:11 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:29:01.802 12:31:11 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:29:01.802 12:31:11 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:29:01.802 12:31:11 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:29:01.802 12:31:11 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:29:01.802 12:31:11 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:29:01.802 12:31:11 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:29:01.802 12:31:11 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:29:02.061 12:31:11 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:29:02.061 12:31:11 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:29:02.061 12:31:11 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 3739624d-349c-4dfb-bec5-9f66ec5bd517 00:29:02.061 12:31:11 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=3739624d-349c-4dfb-bec5-9f66ec5bd517 00:29:02.061 
12:31:11 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:29:02.061 12:31:11 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:29:02.061 12:31:11 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:29:02.061 12:31:11 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3739624d-349c-4dfb-bec5-9f66ec5bd517 00:29:02.061 12:31:11 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:29:02.061 { 00:29:02.061 "name": "3739624d-349c-4dfb-bec5-9f66ec5bd517", 00:29:02.061 "aliases": [ 00:29:02.061 "lvs/nvme0n1p0" 00:29:02.061 ], 00:29:02.061 "product_name": "Logical Volume", 00:29:02.061 "block_size": 4096, 00:29:02.061 "num_blocks": 26476544, 00:29:02.061 "uuid": "3739624d-349c-4dfb-bec5-9f66ec5bd517", 00:29:02.061 "assigned_rate_limits": { 00:29:02.061 "rw_ios_per_sec": 0, 00:29:02.061 "rw_mbytes_per_sec": 0, 00:29:02.061 "r_mbytes_per_sec": 0, 00:29:02.061 "w_mbytes_per_sec": 0 00:29:02.061 }, 00:29:02.061 "claimed": false, 00:29:02.061 "zoned": false, 00:29:02.061 "supported_io_types": { 00:29:02.061 "read": true, 00:29:02.061 "write": true, 00:29:02.061 "unmap": true, 00:29:02.061 "flush": false, 00:29:02.061 "reset": true, 00:29:02.061 "nvme_admin": false, 00:29:02.061 "nvme_io": false, 00:29:02.061 "nvme_io_md": false, 00:29:02.061 "write_zeroes": true, 00:29:02.061 "zcopy": false, 00:29:02.061 "get_zone_info": false, 00:29:02.061 "zone_management": false, 00:29:02.061 "zone_append": false, 00:29:02.061 "compare": false, 00:29:02.061 "compare_and_write": false, 00:29:02.061 "abort": false, 00:29:02.061 "seek_hole": true, 00:29:02.061 "seek_data": true, 00:29:02.061 "copy": false, 00:29:02.061 "nvme_iov_md": false 00:29:02.061 }, 00:29:02.061 "driver_specific": { 00:29:02.061 "lvol": { 00:29:02.061 "lvol_store_uuid": "65f5c800-8d08-4229-b5ae-86f119fe0c55", 00:29:02.061 "base_bdev": "nvme0n1", 00:29:02.061 "thin_provision": true, 00:29:02.061 "num_allocated_clusters": 0, 00:29:02.061 "snapshot": false, 00:29:02.061 "clone": false, 00:29:02.061 "esnap_clone": false 00:29:02.061 } 00:29:02.061 } 00:29:02.061 } 00:29:02.061 ]' 00:29:02.061 12:31:11 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:29:02.320 12:31:11 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:29:02.320 12:31:11 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:29:02.320 12:31:11 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:29:02.320 12:31:11 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:29:02.320 12:31:11 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:29:02.320 12:31:11 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:29:02.320 12:31:11 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:29:02.320 12:31:11 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:29:02.320 12:31:11 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:29:02.320 12:31:11 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 3739624d-349c-4dfb-bec5-9f66ec5bd517 00:29:02.320 12:31:11 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=3739624d-349c-4dfb-bec5-9f66ec5bd517 00:29:02.320 12:31:11 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:29:02.320 12:31:11 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:29:02.320 12:31:11 ftl.ftl_trim -- 
common/autotest_common.sh@1381 -- # local nb 00:29:02.320 12:31:11 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3739624d-349c-4dfb-bec5-9f66ec5bd517 00:29:02.578 12:31:11 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:29:02.578 { 00:29:02.578 "name": "3739624d-349c-4dfb-bec5-9f66ec5bd517", 00:29:02.578 "aliases": [ 00:29:02.578 "lvs/nvme0n1p0" 00:29:02.578 ], 00:29:02.578 "product_name": "Logical Volume", 00:29:02.578 "block_size": 4096, 00:29:02.578 "num_blocks": 26476544, 00:29:02.578 "uuid": "3739624d-349c-4dfb-bec5-9f66ec5bd517", 00:29:02.578 "assigned_rate_limits": { 00:29:02.578 "rw_ios_per_sec": 0, 00:29:02.578 "rw_mbytes_per_sec": 0, 00:29:02.578 "r_mbytes_per_sec": 0, 00:29:02.578 "w_mbytes_per_sec": 0 00:29:02.578 }, 00:29:02.578 "claimed": false, 00:29:02.578 "zoned": false, 00:29:02.579 "supported_io_types": { 00:29:02.579 "read": true, 00:29:02.579 "write": true, 00:29:02.579 "unmap": true, 00:29:02.579 "flush": false, 00:29:02.579 "reset": true, 00:29:02.579 "nvme_admin": false, 00:29:02.579 "nvme_io": false, 00:29:02.579 "nvme_io_md": false, 00:29:02.579 "write_zeroes": true, 00:29:02.579 "zcopy": false, 00:29:02.579 "get_zone_info": false, 00:29:02.579 "zone_management": false, 00:29:02.579 "zone_append": false, 00:29:02.579 "compare": false, 00:29:02.579 "compare_and_write": false, 00:29:02.579 "abort": false, 00:29:02.579 "seek_hole": true, 00:29:02.579 "seek_data": true, 00:29:02.579 "copy": false, 00:29:02.579 "nvme_iov_md": false 00:29:02.579 }, 00:29:02.579 "driver_specific": { 00:29:02.579 "lvol": { 00:29:02.579 "lvol_store_uuid": "65f5c800-8d08-4229-b5ae-86f119fe0c55", 00:29:02.579 "base_bdev": "nvme0n1", 00:29:02.579 "thin_provision": true, 00:29:02.579 "num_allocated_clusters": 0, 00:29:02.579 "snapshot": false, 00:29:02.579 "clone": false, 00:29:02.579 "esnap_clone": false 00:29:02.579 } 00:29:02.579 } 00:29:02.579 } 00:29:02.579 ]' 00:29:02.579 12:31:11 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:29:02.579 12:31:12 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:29:02.579 12:31:12 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:29:02.838 12:31:12 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:29:02.838 12:31:12 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:29:02.838 12:31:12 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:29:02.838 12:31:12 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:29:02.838 12:31:12 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 3739624d-349c-4dfb-bec5-9f66ec5bd517 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:29:02.838 [2024-07-10 12:31:12.245412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.838 [2024-07-10 12:31:12.245470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:02.838 [2024-07-10 12:31:12.245489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:02.838 [2024-07-10 12:31:12.245504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.838 [2024-07-10 12:31:12.248879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.838 [2024-07-10 12:31:12.248921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:02.838 [2024-07-10 12:31:12.248934] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.349 ms 00:29:02.838 [2024-07-10 12:31:12.248946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.838 [2024-07-10 12:31:12.249090] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:02.838 [2024-07-10 12:31:12.250209] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:02.838 [2024-07-10 12:31:12.250235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.838 [2024-07-10 12:31:12.250251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:02.838 [2024-07-10 12:31:12.250262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.171 ms 00:29:02.838 [2024-07-10 12:31:12.250274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.838 [2024-07-10 12:31:12.250526] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID be5c4b66-7920-4ed3-b7e3-f52bd4043dbc 00:29:02.838 [2024-07-10 12:31:12.252012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.838 [2024-07-10 12:31:12.252046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:29:02.838 [2024-07-10 12:31:12.252071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:29:02.838 [2024-07-10 12:31:12.252082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.838 [2024-07-10 12:31:12.261544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.838 [2024-07-10 12:31:12.261575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:02.838 [2024-07-10 12:31:12.261591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.394 ms 00:29:02.838 [2024-07-10 12:31:12.261602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.838 [2024-07-10 12:31:12.261768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.838 [2024-07-10 12:31:12.261784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:02.838 [2024-07-10 12:31:12.261798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:29:02.838 [2024-07-10 12:31:12.261809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.838 [2024-07-10 12:31:12.261865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.838 [2024-07-10 12:31:12.261875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:02.838 [2024-07-10 12:31:12.261891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:29:02.838 [2024-07-10 12:31:12.261901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.838 [2024-07-10 12:31:12.261941] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:29:02.838 [2024-07-10 12:31:12.267472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.838 [2024-07-10 12:31:12.267510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:02.838 [2024-07-10 12:31:12.267522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.550 ms 00:29:02.838 [2024-07-10 12:31:12.267535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.838 [2024-07-10 
12:31:12.267610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.838 [2024-07-10 12:31:12.267625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:02.838 [2024-07-10 12:31:12.267636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:29:02.838 [2024-07-10 12:31:12.267649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.838 [2024-07-10 12:31:12.267684] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:29:02.839 [2024-07-10 12:31:12.267844] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:02.839 [2024-07-10 12:31:12.267860] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:02.839 [2024-07-10 12:31:12.267880] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:29:02.839 [2024-07-10 12:31:12.267893] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:02.839 [2024-07-10 12:31:12.267908] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:02.839 [2024-07-10 12:31:12.267920] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:29:02.839 [2024-07-10 12:31:12.267933] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:02.839 [2024-07-10 12:31:12.267948] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:02.839 [2024-07-10 12:31:12.267978] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:02.839 [2024-07-10 12:31:12.267989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.839 [2024-07-10 12:31:12.268002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:02.839 [2024-07-10 12:31:12.268013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.307 ms 00:29:02.839 [2024-07-10 12:31:12.268027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.839 [2024-07-10 12:31:12.268117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.839 [2024-07-10 12:31:12.268130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:02.839 [2024-07-10 12:31:12.268140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:29:02.839 [2024-07-10 12:31:12.268153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.839 [2024-07-10 12:31:12.268270] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:02.839 [2024-07-10 12:31:12.268293] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:02.839 [2024-07-10 12:31:12.268304] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:02.839 [2024-07-10 12:31:12.268316] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:02.839 [2024-07-10 12:31:12.268327] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:02.839 [2024-07-10 12:31:12.268339] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:02.839 [2024-07-10 12:31:12.268348] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:29:02.839 [2024-07-10 12:31:12.268360] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region 
band_md 00:29:02.839 [2024-07-10 12:31:12.268369] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:29:02.839 [2024-07-10 12:31:12.268381] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:02.839 [2024-07-10 12:31:12.268390] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:02.839 [2024-07-10 12:31:12.268403] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:29:02.839 [2024-07-10 12:31:12.268412] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:02.839 [2024-07-10 12:31:12.268425] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:02.839 [2024-07-10 12:31:12.268435] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:29:02.839 [2024-07-10 12:31:12.268447] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:02.839 [2024-07-10 12:31:12.268456] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:02.839 [2024-07-10 12:31:12.268471] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:29:02.839 [2024-07-10 12:31:12.268481] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:02.839 [2024-07-10 12:31:12.268492] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:02.839 [2024-07-10 12:31:12.268502] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:29:02.839 [2024-07-10 12:31:12.268514] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:02.839 [2024-07-10 12:31:12.268523] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:02.839 [2024-07-10 12:31:12.268535] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:29:02.839 [2024-07-10 12:31:12.268544] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:02.839 [2024-07-10 12:31:12.268556] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:02.839 [2024-07-10 12:31:12.268565] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:29:02.839 [2024-07-10 12:31:12.268577] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:02.839 [2024-07-10 12:31:12.268586] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:02.839 [2024-07-10 12:31:12.268598] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:29:02.839 [2024-07-10 12:31:12.268607] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:02.839 [2024-07-10 12:31:12.268618] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:02.839 [2024-07-10 12:31:12.268628] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:29:02.839 [2024-07-10 12:31:12.268642] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:02.839 [2024-07-10 12:31:12.268651] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:02.839 [2024-07-10 12:31:12.268663] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:29:02.839 [2024-07-10 12:31:12.268672] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:02.839 [2024-07-10 12:31:12.268684] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:02.839 [2024-07-10 12:31:12.268694] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:29:02.839 [2024-07-10 12:31:12.268707] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:02.839 [2024-07-10 12:31:12.268716] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:02.839 [2024-07-10 12:31:12.268737] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:29:02.839 [2024-07-10 12:31:12.268748] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:02.839 [2024-07-10 12:31:12.268759] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:02.839 [2024-07-10 12:31:12.268770] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:02.839 [2024-07-10 12:31:12.268786] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:02.839 [2024-07-10 12:31:12.268795] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:02.839 [2024-07-10 12:31:12.268808] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:02.839 [2024-07-10 12:31:12.268817] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:02.839 [2024-07-10 12:31:12.268832] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:02.839 [2024-07-10 12:31:12.268842] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:02.839 [2024-07-10 12:31:12.268854] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:02.839 [2024-07-10 12:31:12.268863] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:02.839 [2024-07-10 12:31:12.268879] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:02.839 [2024-07-10 12:31:12.268894] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:02.839 [2024-07-10 12:31:12.268907] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:29:02.839 [2024-07-10 12:31:12.268918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:29:02.839 [2024-07-10 12:31:12.268931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:29:02.839 [2024-07-10 12:31:12.268942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:29:02.839 [2024-07-10 12:31:12.268955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:29:02.839 [2024-07-10 12:31:12.268966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:29:02.839 [2024-07-10 12:31:12.268980] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:29:02.839 [2024-07-10 12:31:12.268990] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:29:02.839 [2024-07-10 12:31:12.269005] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:29:02.839 [2024-07-10 12:31:12.269015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 
ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:29:02.839 [2024-07-10 12:31:12.269030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:29:02.839 [2024-07-10 12:31:12.269040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:29:02.839 [2024-07-10 12:31:12.269054] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:29:02.839 [2024-07-10 12:31:12.269065] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:29:02.839 [2024-07-10 12:31:12.269078] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:02.839 [2024-07-10 12:31:12.269089] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:02.839 [2024-07-10 12:31:12.269103] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:02.839 [2024-07-10 12:31:12.269114] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:02.839 [2024-07-10 12:31:12.269127] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:02.839 [2024-07-10 12:31:12.269138] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:02.839 [2024-07-10 12:31:12.269151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.839 [2024-07-10 12:31:12.269161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:02.839 [2024-07-10 12:31:12.269177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.930 ms 00:29:02.839 [2024-07-10 12:31:12.269186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.839 [2024-07-10 12:31:12.269278] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
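Up to this point the script has attached the base controller (nvme0 at 0000:00:11.0), deleted a pre-existing lvstore, created the lvstore "lvs" with a 103424 MiB thin-provisioned lvol on it, attached the cache controller (nvc0 at 0000:00:10.0), carved a 5171 MiB split nvc0n1p0 out of it, and called bdev_ftl_create to assemble ftl0. The layout dump above is internally consistent: 103424 MiB of base capacity at a 4096-byte block size is 103424 x 256 = 26,476,544 blocks (the lvol's num_blocks), and 23,592,960 L2P entries at 4 bytes each are 94,371,840 bytes = 90 MiB, matching the "Region l2p ... blocks: 90.00 MiB" entry. Below is a sketch of the equivalent standalone RPC call with the values copied from the trace; the lvol UUID and nvc0n1p0 come from the earlier steps.

# Create the FTL bdev on top of the lvol (data) and nvc0n1p0 (NV cache),
# limiting the in-DRAM L2P to 60 MiB and reserving 10% overprovisioning.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create \
    -b ftl0 \
    -d 3739624d-349c-4dfb-bec5-9f66ec5bd517 \
    -c nvc0n1p0 \
    --core_mask 7 \
    --l2p_dram_limit 60 \
    --overprovisioning 10
# Layout arithmetic from the dump above:
#   base capacity: 103424 MiB / 4 KiB per block = 26,476,544 blocks
#   L2P size:      23,592,960 entries * 4 B     = 94,371,840 B = 90 MiB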
00:29:02.839 [2024-07-10 12:31:12.269290] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:29:06.127 [2024-07-10 12:31:15.363603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.127 [2024-07-10 12:31:15.363671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:29:06.127 [2024-07-10 12:31:15.363694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3099.340 ms 00:29:06.127 [2024-07-10 12:31:15.363706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.127 [2024-07-10 12:31:15.408000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.127 [2024-07-10 12:31:15.408311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:06.127 [2024-07-10 12:31:15.408432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.980 ms 00:29:06.127 [2024-07-10 12:31:15.408470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.127 [2024-07-10 12:31:15.408684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.127 [2024-07-10 12:31:15.408839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:06.127 [2024-07-10 12:31:15.408858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:29:06.127 [2024-07-10 12:31:15.408873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.127 [2024-07-10 12:31:15.471240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.127 [2024-07-10 12:31:15.471313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:06.127 [2024-07-10 12:31:15.471337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.413 ms 00:29:06.127 [2024-07-10 12:31:15.471351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.127 [2024-07-10 12:31:15.471500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.127 [2024-07-10 12:31:15.471517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:06.127 [2024-07-10 12:31:15.471535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:06.127 [2024-07-10 12:31:15.471548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.127 [2024-07-10 12:31:15.472392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.127 [2024-07-10 12:31:15.472411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:06.127 [2024-07-10 12:31:15.472428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.803 ms 00:29:06.127 [2024-07-10 12:31:15.472441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.127 [2024-07-10 12:31:15.472592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.127 [2024-07-10 12:31:15.472607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:06.127 [2024-07-10 12:31:15.472623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:29:06.127 [2024-07-10 12:31:15.472636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.127 [2024-07-10 12:31:15.501616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.127 [2024-07-10 12:31:15.501681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:06.127 [2024-07-10 
12:31:15.501701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.976 ms 00:29:06.127 [2024-07-10 12:31:15.501712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.127 [2024-07-10 12:31:15.518982] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:29:06.127 [2024-07-10 12:31:15.546667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.127 [2024-07-10 12:31:15.546757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:06.127 [2024-07-10 12:31:15.546775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.848 ms 00:29:06.127 [2024-07-10 12:31:15.546789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.398 [2024-07-10 12:31:15.646986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.398 [2024-07-10 12:31:15.647067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:29:06.398 [2024-07-10 12:31:15.647087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 100.227 ms 00:29:06.398 [2024-07-10 12:31:15.647101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.399 [2024-07-10 12:31:15.647349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.399 [2024-07-10 12:31:15.647367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:06.399 [2024-07-10 12:31:15.647380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.151 ms 00:29:06.399 [2024-07-10 12:31:15.647398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.399 [2024-07-10 12:31:15.690164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.399 [2024-07-10 12:31:15.690240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:29:06.399 [2024-07-10 12:31:15.690259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.795 ms 00:29:06.399 [2024-07-10 12:31:15.690274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.399 [2024-07-10 12:31:15.735155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.399 [2024-07-10 12:31:15.735232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:29:06.399 [2024-07-10 12:31:15.735251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.846 ms 00:29:06.399 [2024-07-10 12:31:15.735265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.399 [2024-07-10 12:31:15.736231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.399 [2024-07-10 12:31:15.736262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:06.399 [2024-07-10 12:31:15.736275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.858 ms 00:29:06.399 [2024-07-10 12:31:15.736288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.399 [2024-07-10 12:31:15.862575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.399 [2024-07-10 12:31:15.862662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:29:06.399 [2024-07-10 12:31:15.862681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 126.444 ms 00:29:06.399 [2024-07-10 12:31:15.862699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.679 [2024-07-10 
12:31:15.906780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.679 [2024-07-10 12:31:15.906842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:29:06.679 [2024-07-10 12:31:15.906860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.996 ms 00:29:06.679 [2024-07-10 12:31:15.906878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.679 [2024-07-10 12:31:15.950273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.679 [2024-07-10 12:31:15.950351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:29:06.679 [2024-07-10 12:31:15.950369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.329 ms 00:29:06.679 [2024-07-10 12:31:15.950382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.679 [2024-07-10 12:31:15.993117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.679 [2024-07-10 12:31:15.993192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:06.679 [2024-07-10 12:31:15.993210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.665 ms 00:29:06.679 [2024-07-10 12:31:15.993223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.679 [2024-07-10 12:31:15.993359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.679 [2024-07-10 12:31:15.993376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:06.679 [2024-07-10 12:31:15.993388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:06.679 [2024-07-10 12:31:15.993406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.679 [2024-07-10 12:31:15.993498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.679 [2024-07-10 12:31:15.993513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:06.679 [2024-07-10 12:31:15.993525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:29:06.679 [2024-07-10 12:31:15.993558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.679 [2024-07-10 12:31:15.994887] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:06.679 [2024-07-10 12:31:16.001382] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3755.225 ms, result 0 00:29:06.679 [2024-07-10 12:31:16.002440] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:06.679 { 00:29:06.679 "name": "ftl0", 00:29:06.679 "uuid": "be5c4b66-7920-4ed3-b7e3-f52bd4043dbc" 00:29:06.679 } 00:29:06.679 12:31:16 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:29:06.679 12:31:16 ftl.ftl_trim -- common/autotest_common.sh@897 -- # local bdev_name=ftl0 00:29:06.679 12:31:16 ftl.ftl_trim -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:29:06.679 12:31:16 ftl.ftl_trim -- common/autotest_common.sh@899 -- # local i 00:29:06.679 12:31:16 ftl.ftl_trim -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:29:06.679 12:31:16 ftl.ftl_trim -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:29:06.679 12:31:16 ftl.ftl_trim -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:06.937 12:31:16 ftl.ftl_trim -- common/autotest_common.sh@904 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:29:06.937 [ 00:29:06.937 { 00:29:06.937 "name": "ftl0", 00:29:06.937 "aliases": [ 00:29:06.937 "be5c4b66-7920-4ed3-b7e3-f52bd4043dbc" 00:29:06.937 ], 00:29:06.937 "product_name": "FTL disk", 00:29:06.937 "block_size": 4096, 00:29:06.937 "num_blocks": 23592960, 00:29:06.937 "uuid": "be5c4b66-7920-4ed3-b7e3-f52bd4043dbc", 00:29:06.937 "assigned_rate_limits": { 00:29:06.937 "rw_ios_per_sec": 0, 00:29:06.937 "rw_mbytes_per_sec": 0, 00:29:06.937 "r_mbytes_per_sec": 0, 00:29:06.937 "w_mbytes_per_sec": 0 00:29:06.937 }, 00:29:06.937 "claimed": false, 00:29:06.937 "zoned": false, 00:29:06.937 "supported_io_types": { 00:29:06.937 "read": true, 00:29:06.937 "write": true, 00:29:06.937 "unmap": true, 00:29:06.937 "flush": true, 00:29:06.937 "reset": false, 00:29:06.937 "nvme_admin": false, 00:29:06.937 "nvme_io": false, 00:29:06.937 "nvme_io_md": false, 00:29:06.937 "write_zeroes": true, 00:29:06.937 "zcopy": false, 00:29:06.937 "get_zone_info": false, 00:29:06.937 "zone_management": false, 00:29:06.938 "zone_append": false, 00:29:06.938 "compare": false, 00:29:06.938 "compare_and_write": false, 00:29:06.938 "abort": false, 00:29:06.938 "seek_hole": false, 00:29:06.938 "seek_data": false, 00:29:06.938 "copy": false, 00:29:06.938 "nvme_iov_md": false 00:29:06.938 }, 00:29:06.938 "driver_specific": { 00:29:06.938 "ftl": { 00:29:06.938 "base_bdev": "3739624d-349c-4dfb-bec5-9f66ec5bd517", 00:29:06.938 "cache": "nvc0n1p0" 00:29:06.938 } 00:29:06.938 } 00:29:06.938 } 00:29:06.938 ] 00:29:06.938 12:31:16 ftl.ftl_trim -- common/autotest_common.sh@905 -- # return 0 00:29:06.938 12:31:16 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:29:06.938 12:31:16 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:29:07.197 12:31:16 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:29:07.197 12:31:16 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:29:07.457 12:31:16 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:29:07.457 { 00:29:07.457 "name": "ftl0", 00:29:07.457 "aliases": [ 00:29:07.457 "be5c4b66-7920-4ed3-b7e3-f52bd4043dbc" 00:29:07.457 ], 00:29:07.457 "product_name": "FTL disk", 00:29:07.457 "block_size": 4096, 00:29:07.457 "num_blocks": 23592960, 00:29:07.457 "uuid": "be5c4b66-7920-4ed3-b7e3-f52bd4043dbc", 00:29:07.457 "assigned_rate_limits": { 00:29:07.457 "rw_ios_per_sec": 0, 00:29:07.457 "rw_mbytes_per_sec": 0, 00:29:07.457 "r_mbytes_per_sec": 0, 00:29:07.457 "w_mbytes_per_sec": 0 00:29:07.457 }, 00:29:07.457 "claimed": false, 00:29:07.457 "zoned": false, 00:29:07.457 "supported_io_types": { 00:29:07.457 "read": true, 00:29:07.457 "write": true, 00:29:07.457 "unmap": true, 00:29:07.457 "flush": true, 00:29:07.457 "reset": false, 00:29:07.457 "nvme_admin": false, 00:29:07.457 "nvme_io": false, 00:29:07.457 "nvme_io_md": false, 00:29:07.457 "write_zeroes": true, 00:29:07.457 "zcopy": false, 00:29:07.457 "get_zone_info": false, 00:29:07.457 "zone_management": false, 00:29:07.457 "zone_append": false, 00:29:07.457 "compare": false, 00:29:07.457 "compare_and_write": false, 00:29:07.457 "abort": false, 00:29:07.457 "seek_hole": false, 00:29:07.457 "seek_data": false, 00:29:07.457 "copy": false, 00:29:07.457 "nvme_iov_md": false 00:29:07.457 }, 00:29:07.457 "driver_specific": { 00:29:07.457 "ftl": { 00:29:07.457 "base_bdev": "3739624d-349c-4dfb-bec5-9f66ec5bd517", 00:29:07.457 "cache": "nvc0n1p0" 
00:29:07.457 } 00:29:07.457 } 00:29:07.457 } 00:29:07.457 ]' 00:29:07.457 12:31:16 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:29:07.457 12:31:16 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:29:07.457 12:31:16 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:29:07.716 [2024-07-10 12:31:17.006903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:07.716 [2024-07-10 12:31:17.006962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:07.716 [2024-07-10 12:31:17.006985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:07.716 [2024-07-10 12:31:17.006996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:07.716 [2024-07-10 12:31:17.007058] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:29:07.716 [2024-07-10 12:31:17.011250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:07.716 [2024-07-10 12:31:17.011289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:07.716 [2024-07-10 12:31:17.011303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.178 ms 00:29:07.716 [2024-07-10 12:31:17.011322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:07.716 [2024-07-10 12:31:17.011873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:07.716 [2024-07-10 12:31:17.011896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:07.716 [2024-07-10 12:31:17.011908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.492 ms 00:29:07.716 [2024-07-10 12:31:17.011921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:07.716 [2024-07-10 12:31:17.014754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:07.716 [2024-07-10 12:31:17.014780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:07.716 [2024-07-10 12:31:17.014791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.803 ms 00:29:07.716 [2024-07-10 12:31:17.014804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:07.716 [2024-07-10 12:31:17.020410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:07.716 [2024-07-10 12:31:17.020445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:07.716 [2024-07-10 12:31:17.020458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.579 ms 00:29:07.716 [2024-07-10 12:31:17.020470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:07.716 [2024-07-10 12:31:17.062580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:07.716 [2024-07-10 12:31:17.062626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:07.716 [2024-07-10 12:31:17.062642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.076 ms 00:29:07.716 [2024-07-10 12:31:17.062660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:07.716 [2024-07-10 12:31:17.085043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:07.716 [2024-07-10 12:31:17.085087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:07.716 [2024-07-10 12:31:17.085105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.315 ms 00:29:07.716 
[2024-07-10 12:31:17.085118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:07.716 [2024-07-10 12:31:17.085345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:07.716 [2024-07-10 12:31:17.085361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:07.716 [2024-07-10 12:31:17.085373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:29:07.716 [2024-07-10 12:31:17.085386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:07.716 [2024-07-10 12:31:17.123658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:07.716 [2024-07-10 12:31:17.123700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:29:07.716 [2024-07-10 12:31:17.123714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.302 ms 00:29:07.716 [2024-07-10 12:31:17.123738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:07.716 [2024-07-10 12:31:17.162888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:07.716 [2024-07-10 12:31:17.162956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:29:07.716 [2024-07-10 12:31:17.162974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.124 ms 00:29:07.716 [2024-07-10 12:31:17.162991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:07.977 [2024-07-10 12:31:17.201379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:07.977 [2024-07-10 12:31:17.201429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:07.977 [2024-07-10 12:31:17.201444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.343 ms 00:29:07.977 [2024-07-10 12:31:17.201458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:07.977 [2024-07-10 12:31:17.238813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:07.977 [2024-07-10 12:31:17.238859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:07.977 [2024-07-10 12:31:17.238873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.273 ms 00:29:07.977 [2024-07-10 12:31:17.238886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:07.977 [2024-07-10 12:31:17.238969] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:07.977 [2024-07-10 12:31:17.238993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:07.977 [2024-07-10 12:31:17.239007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:07.977 [2024-07-10 12:31:17.239021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:07.977 [2024-07-10 12:31:17.239032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:07.977 [2024-07-10 12:31:17.239046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:07.977 [2024-07-10 12:31:17.239057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:07.977 [2024-07-10 12:31:17.239075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:07.977 [2024-07-10 12:31:17.239087] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:07.977 [2024-07-10 12:31:17.239101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:07.977 [2024-07-10 12:31:17.239112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:07.977 [2024-07-10 12:31:17.239127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:07.977 [2024-07-10 12:31:17.239138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:07.977 [2024-07-10 12:31:17.239151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:07.977 [2024-07-10 12:31:17.239161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:07.977 [2024-07-10 12:31:17.239175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:07.977 [2024-07-10 12:31:17.239186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:07.977 [2024-07-10 12:31:17.239199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:07.977 [2024-07-10 12:31:17.239210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:07.977 [2024-07-10 12:31:17.239225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:07.977 [2024-07-10 12:31:17.239235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:07.977 [2024-07-10 12:31:17.239249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:07.977 [2024-07-10 12:31:17.239259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:07.977 [2024-07-10 12:31:17.239277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:07.977 [2024-07-10 12:31:17.239287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:07.977 [2024-07-10 12:31:17.239301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:07.977 [2024-07-10 12:31:17.239311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:07.977 [2024-07-10 12:31:17.239326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:07.977 [2024-07-10 12:31:17.239337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:07.977 [2024-07-10 12:31:17.239351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:07.978 [2024-07-10 12:31:17.239381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:07.978 [2024-07-10 12:31:17.239396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:07.978 [2024-07-10 12:31:17.239408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:07.978 [2024-07-10 12:31:17.239422] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:07.978 [2024-07-10 12:31:17.239433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:07.978 [2024-07-10 12:31:17.239447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:07.978 [2024-07-10 12:31:17.239458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:07.978 [2024-07-10 12:31:17.239472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:07.978 [2024-07-10 12:31:17.239483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:07.978 [2024-07-10 12:31:17.239499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:07.978 [2024-07-10 12:31:17.239510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:07.978 [2024-07-10 12:31:17.239523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:07.978 [2024-07-10 12:31:17.239533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:07.978 [2024-07-10 12:31:17.239547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:07.978 [2024-07-10 12:31:17.239558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:07.978 [2024-07-10 12:31:17.239571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:07.978 [2024-07-10 12:31:17.239581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:07.978 [2024-07-10 12:31:17.239595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:07.978 [2024-07-10 12:31:17.239606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:07.978 [2024-07-10 12:31:17.239619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:07.978 [2024-07-10 12:31:17.239630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:07.978 [2024-07-10 12:31:17.239643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:07.978 [2024-07-10 12:31:17.239653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:07.978 [2024-07-10 12:31:17.239666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:07.978 [2024-07-10 12:31:17.239677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:07.978 [2024-07-10 12:31:17.239693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:07.978 [2024-07-10 12:31:17.239707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:07.978 [2024-07-10 12:31:17.239720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:07.978 [2024-07-10 
12:31:17.239745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:07.978 [2024-07-10 12:31:17.239760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:07.978 [2024-07-10 12:31:17.239771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:07.978 [2024-07-10 12:31:17.239785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:07.978 [2024-07-10 12:31:17.239797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:07.978 [2024-07-10 12:31:17.239810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:07.978 [2024-07-10 12:31:17.239822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:07.978 [2024-07-10 12:31:17.239836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:07.978 [2024-07-10 12:31:17.239847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:07.978 [2024-07-10 12:31:17.239860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:07.978 [2024-07-10 12:31:17.239871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:07.978 [2024-07-10 12:31:17.239884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:07.978 [2024-07-10 12:31:17.239895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:07.978 [2024-07-10 12:31:17.239910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:07.978 [2024-07-10 12:31:17.239924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:07.978 [2024-07-10 12:31:17.239938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:07.978 [2024-07-10 12:31:17.239949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:07.978 [2024-07-10 12:31:17.239962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:07.978 [2024-07-10 12:31:17.239974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:07.978 [2024-07-10 12:31:17.239987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:07.978 [2024-07-10 12:31:17.239998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:07.978 [2024-07-10 12:31:17.240011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:07.978 [2024-07-10 12:31:17.240022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:07.978 [2024-07-10 12:31:17.240035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:07.978 [2024-07-10 12:31:17.240046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 
00:29:07.978 [2024-07-10 12:31:17.240066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:07.978 [2024-07-10 12:31:17.240077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:07.978 [2024-07-10 12:31:17.240090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:07.978 [2024-07-10 12:31:17.240101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:07.978 [2024-07-10 12:31:17.240116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:07.978 [2024-07-10 12:31:17.240127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:07.978 [2024-07-10 12:31:17.240141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:07.978 [2024-07-10 12:31:17.240153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:07.978 [2024-07-10 12:31:17.240166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:07.978 [2024-07-10 12:31:17.240178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:07.978 [2024-07-10 12:31:17.240192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:07.978 [2024-07-10 12:31:17.240203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:07.978 [2024-07-10 12:31:17.240216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:07.978 [2024-07-10 12:31:17.240227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:07.978 [2024-07-10 12:31:17.240241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:07.978 [2024-07-10 12:31:17.240251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:07.978 [2024-07-10 12:31:17.240264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:07.978 [2024-07-10 12:31:17.240275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:07.978 [2024-07-10 12:31:17.240297] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:07.978 [2024-07-10 12:31:17.240312] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: be5c4b66-7920-4ed3-b7e3-f52bd4043dbc 00:29:07.978 [2024-07-10 12:31:17.240329] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:07.978 [2024-07-10 12:31:17.240338] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:07.978 [2024-07-10 12:31:17.240354] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:07.978 [2024-07-10 12:31:17.240364] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:07.978 [2024-07-10 12:31:17.240377] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:07.978 [2024-07-10 12:31:17.240388] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:07.978 [2024-07-10 12:31:17.240400] 
ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:07.978 [2024-07-10 12:31:17.240409] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:07.978 [2024-07-10 12:31:17.240420] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:07.978 [2024-07-10 12:31:17.240430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:07.978 [2024-07-10 12:31:17.240443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:07.978 [2024-07-10 12:31:17.240455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.466 ms 00:29:07.978 [2024-07-10 12:31:17.240467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:07.978 [2024-07-10 12:31:17.261721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:07.978 [2024-07-10 12:31:17.261782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:07.978 [2024-07-10 12:31:17.261812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.250 ms 00:29:07.978 [2024-07-10 12:31:17.261829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:07.978 [2024-07-10 12:31:17.262470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:07.978 [2024-07-10 12:31:17.262493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:07.978 [2024-07-10 12:31:17.262504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.539 ms 00:29:07.978 [2024-07-10 12:31:17.262516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:07.978 [2024-07-10 12:31:17.340132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:07.978 [2024-07-10 12:31:17.340189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:07.978 [2024-07-10 12:31:17.340204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:07.978 [2024-07-10 12:31:17.340217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:07.978 [2024-07-10 12:31:17.340344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:07.978 [2024-07-10 12:31:17.340359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:07.978 [2024-07-10 12:31:17.340371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:07.978 [2024-07-10 12:31:17.340383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:07.978 [2024-07-10 12:31:17.340460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:07.978 [2024-07-10 12:31:17.340480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:07.978 [2024-07-10 12:31:17.340491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:07.978 [2024-07-10 12:31:17.340506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:07.978 [2024-07-10 12:31:17.340539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:07.978 [2024-07-10 12:31:17.340552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:07.978 [2024-07-10 12:31:17.340562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:07.978 [2024-07-10 12:31:17.340575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.238 [2024-07-10 12:31:17.476958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:29:08.238 [2024-07-10 12:31:17.477027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:08.238 [2024-07-10 12:31:17.477043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:08.238 [2024-07-10 12:31:17.477057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.238 [2024-07-10 12:31:17.580747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:08.238 [2024-07-10 12:31:17.580819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:08.238 [2024-07-10 12:31:17.580836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:08.238 [2024-07-10 12:31:17.580850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.238 [2024-07-10 12:31:17.580969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:08.238 [2024-07-10 12:31:17.580984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:08.238 [2024-07-10 12:31:17.580998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:08.238 [2024-07-10 12:31:17.581015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.238 [2024-07-10 12:31:17.581074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:08.238 [2024-07-10 12:31:17.581088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:08.238 [2024-07-10 12:31:17.581099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:08.238 [2024-07-10 12:31:17.581111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.238 [2024-07-10 12:31:17.581259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:08.238 [2024-07-10 12:31:17.581275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:08.238 [2024-07-10 12:31:17.581302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:08.238 [2024-07-10 12:31:17.581318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.238 [2024-07-10 12:31:17.581375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:08.238 [2024-07-10 12:31:17.581403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:08.238 [2024-07-10 12:31:17.581413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:08.238 [2024-07-10 12:31:17.581426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.238 [2024-07-10 12:31:17.581481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:08.238 [2024-07-10 12:31:17.581495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:08.238 [2024-07-10 12:31:17.581506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:08.238 [2024-07-10 12:31:17.581524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.238 [2024-07-10 12:31:17.581583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:08.238 [2024-07-10 12:31:17.581597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:08.238 [2024-07-10 12:31:17.581609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:08.238 [2024-07-10 12:31:17.581620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.238 [2024-07-10 
12:31:17.581829] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 575.847 ms, result 0 00:29:08.238 true 00:29:08.238 12:31:17 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 81008 00:29:08.238 12:31:17 ftl.ftl_trim -- common/autotest_common.sh@948 -- # '[' -z 81008 ']' 00:29:08.238 12:31:17 ftl.ftl_trim -- common/autotest_common.sh@952 -- # kill -0 81008 00:29:08.238 12:31:17 ftl.ftl_trim -- common/autotest_common.sh@953 -- # uname 00:29:08.238 12:31:17 ftl.ftl_trim -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:08.238 12:31:17 ftl.ftl_trim -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81008 00:29:08.238 killing process with pid 81008 00:29:08.238 12:31:17 ftl.ftl_trim -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:08.238 12:31:17 ftl.ftl_trim -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:08.238 12:31:17 ftl.ftl_trim -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81008' 00:29:08.238 12:31:17 ftl.ftl_trim -- common/autotest_common.sh@967 -- # kill 81008 00:29:08.238 12:31:17 ftl.ftl_trim -- common/autotest_common.sh@972 -- # wait 81008 00:29:13.508 12:31:22 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:29:14.076 65536+0 records in 00:29:14.076 65536+0 records out 00:29:14.076 268435456 bytes (268 MB, 256 MiB) copied, 0.97517 s, 275 MB/s 00:29:14.076 12:31:23 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:14.334 [2024-07-10 12:31:23.607489] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:29:14.334 [2024-07-10 12:31:23.607620] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81214 ] 00:29:14.334 [2024-07-10 12:31:23.777549] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:14.592 [2024-07-10 12:31:24.015323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:15.161 [2024-07-10 12:31:24.413195] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:15.161 [2024-07-10 12:31:24.413285] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:15.161 [2024-07-10 12:31:24.576393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.161 [2024-07-10 12:31:24.576445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:15.161 [2024-07-10 12:31:24.576462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:15.161 [2024-07-10 12:31:24.576472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.161 [2024-07-10 12:31:24.579605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.161 [2024-07-10 12:31:24.579645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:15.161 [2024-07-10 12:31:24.579658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.117 ms 00:29:15.161 [2024-07-10 12:31:24.579668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.161 [2024-07-10 12:31:24.579788] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:15.161 [2024-07-10 12:31:24.580875] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:15.161 [2024-07-10 12:31:24.580910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.161 [2024-07-10 12:31:24.580921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:15.161 [2024-07-10 12:31:24.580933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.131 ms 00:29:15.161 [2024-07-10 12:31:24.580943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.161 [2024-07-10 12:31:24.582408] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:15.161 [2024-07-10 12:31:24.602780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.161 [2024-07-10 12:31:24.602818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:15.161 [2024-07-10 12:31:24.602838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.404 ms 00:29:15.161 [2024-07-10 12:31:24.602849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.161 [2024-07-10 12:31:24.602949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.161 [2024-07-10 12:31:24.602963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:15.161 [2024-07-10 12:31:24.602975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:29:15.161 [2024-07-10 12:31:24.602986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.161 [2024-07-10 12:31:24.610410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:29:15.161 [2024-07-10 12:31:24.610442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:15.161 [2024-07-10 12:31:24.610454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.393 ms 00:29:15.161 [2024-07-10 12:31:24.610465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.161 [2024-07-10 12:31:24.610565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.161 [2024-07-10 12:31:24.610579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:15.161 [2024-07-10 12:31:24.610590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:29:15.161 [2024-07-10 12:31:24.610600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.161 [2024-07-10 12:31:24.610635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.161 [2024-07-10 12:31:24.610647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:15.161 [2024-07-10 12:31:24.610658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:29:15.161 [2024-07-10 12:31:24.610671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.161 [2024-07-10 12:31:24.610695] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:29:15.161 [2024-07-10 12:31:24.616292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.161 [2024-07-10 12:31:24.616325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:15.161 [2024-07-10 12:31:24.616338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.612 ms 00:29:15.161 [2024-07-10 12:31:24.616348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.161 [2024-07-10 12:31:24.616417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.161 [2024-07-10 12:31:24.616430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:15.161 [2024-07-10 12:31:24.616441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:29:15.161 [2024-07-10 12:31:24.616451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.161 [2024-07-10 12:31:24.616472] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:15.161 [2024-07-10 12:31:24.616497] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:15.161 [2024-07-10 12:31:24.616536] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:15.161 [2024-07-10 12:31:24.616554] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:29:15.161 [2024-07-10 12:31:24.616648] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:15.161 [2024-07-10 12:31:24.616663] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:15.161 [2024-07-10 12:31:24.616676] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:29:15.161 [2024-07-10 12:31:24.616689] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:15.161 [2024-07-10 12:31:24.616702] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:15.161 [2024-07-10 12:31:24.616713] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:29:15.161 [2024-07-10 12:31:24.616726] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:15.161 [2024-07-10 12:31:24.616754] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:15.161 [2024-07-10 12:31:24.616764] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:15.161 [2024-07-10 12:31:24.616775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.161 [2024-07-10 12:31:24.616785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:15.161 [2024-07-10 12:31:24.616796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:29:15.161 [2024-07-10 12:31:24.616806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.161 [2024-07-10 12:31:24.616879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.161 [2024-07-10 12:31:24.616891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:15.161 [2024-07-10 12:31:24.616901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:29:15.161 [2024-07-10 12:31:24.616914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.161 [2024-07-10 12:31:24.616998] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:15.161 [2024-07-10 12:31:24.617011] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:15.161 [2024-07-10 12:31:24.617021] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:15.161 [2024-07-10 12:31:24.617031] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:15.161 [2024-07-10 12:31:24.617041] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:15.161 [2024-07-10 12:31:24.617050] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:15.161 [2024-07-10 12:31:24.617059] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:29:15.161 [2024-07-10 12:31:24.617068] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:15.161 [2024-07-10 12:31:24.617078] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:29:15.161 [2024-07-10 12:31:24.617087] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:15.162 [2024-07-10 12:31:24.617096] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:15.162 [2024-07-10 12:31:24.617106] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:29:15.162 [2024-07-10 12:31:24.617117] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:15.162 [2024-07-10 12:31:24.617127] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:15.162 [2024-07-10 12:31:24.617136] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:29:15.162 [2024-07-10 12:31:24.617145] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:15.162 [2024-07-10 12:31:24.617154] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:15.162 [2024-07-10 12:31:24.617163] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:29:15.162 [2024-07-10 12:31:24.617184] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:15.162 [2024-07-10 12:31:24.617193] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:15.162 [2024-07-10 12:31:24.617203] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:29:15.162 [2024-07-10 12:31:24.617212] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:15.162 [2024-07-10 12:31:24.617220] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:15.162 [2024-07-10 12:31:24.617230] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:29:15.162 [2024-07-10 12:31:24.617239] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:15.162 [2024-07-10 12:31:24.617248] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:15.162 [2024-07-10 12:31:24.617258] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:29:15.162 [2024-07-10 12:31:24.617267] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:15.162 [2024-07-10 12:31:24.617276] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:15.162 [2024-07-10 12:31:24.617285] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:29:15.162 [2024-07-10 12:31:24.617294] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:15.162 [2024-07-10 12:31:24.617304] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:15.162 [2024-07-10 12:31:24.617313] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:29:15.162 [2024-07-10 12:31:24.617322] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:15.162 [2024-07-10 12:31:24.617331] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:15.162 [2024-07-10 12:31:24.617340] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:29:15.162 [2024-07-10 12:31:24.617349] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:15.162 [2024-07-10 12:31:24.617358] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:15.162 [2024-07-10 12:31:24.617367] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:29:15.162 [2024-07-10 12:31:24.617376] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:15.162 [2024-07-10 12:31:24.617385] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:15.162 [2024-07-10 12:31:24.617394] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:29:15.162 [2024-07-10 12:31:24.617403] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:15.162 [2024-07-10 12:31:24.617412] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:15.162 [2024-07-10 12:31:24.617424] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:15.162 [2024-07-10 12:31:24.617434] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:15.162 [2024-07-10 12:31:24.617444] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:15.162 [2024-07-10 12:31:24.617453] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:15.162 [2024-07-10 12:31:24.617463] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:15.162 [2024-07-10 12:31:24.617472] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:15.162 
[2024-07-10 12:31:24.617481] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:15.162 [2024-07-10 12:31:24.617490] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:15.162 [2024-07-10 12:31:24.617499] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:15.162 [2024-07-10 12:31:24.617510] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:15.162 [2024-07-10 12:31:24.617526] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:15.162 [2024-07-10 12:31:24.617537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:29:15.162 [2024-07-10 12:31:24.617549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:29:15.162 [2024-07-10 12:31:24.617559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:29:15.162 [2024-07-10 12:31:24.617570] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:29:15.162 [2024-07-10 12:31:24.617580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:29:15.162 [2024-07-10 12:31:24.617590] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:29:15.162 [2024-07-10 12:31:24.617601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:29:15.162 [2024-07-10 12:31:24.617612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:29:15.162 [2024-07-10 12:31:24.617622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:29:15.162 [2024-07-10 12:31:24.617633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:29:15.162 [2024-07-10 12:31:24.617643] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:29:15.162 [2024-07-10 12:31:24.617653] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:29:15.162 [2024-07-10 12:31:24.617663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:29:15.162 [2024-07-10 12:31:24.617674] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:29:15.162 [2024-07-10 12:31:24.617684] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:15.162 [2024-07-10 12:31:24.617695] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:15.162 [2024-07-10 12:31:24.617705] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:29:15.162 [2024-07-10 12:31:24.617716] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:15.162 [2024-07-10 12:31:24.617727] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:15.162 [2024-07-10 12:31:24.617749] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:15.162 [2024-07-10 12:31:24.617760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.162 [2024-07-10 12:31:24.617771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:15.162 [2024-07-10 12:31:24.617782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.813 ms 00:29:15.162 [2024-07-10 12:31:24.617792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.422 [2024-07-10 12:31:24.673113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.422 [2024-07-10 12:31:24.673170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:15.422 [2024-07-10 12:31:24.673187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.353 ms 00:29:15.422 [2024-07-10 12:31:24.673198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.422 [2024-07-10 12:31:24.673358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.422 [2024-07-10 12:31:24.673381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:15.422 [2024-07-10 12:31:24.673393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:29:15.422 [2024-07-10 12:31:24.673407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.422 [2024-07-10 12:31:24.724235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.422 [2024-07-10 12:31:24.724284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:15.422 [2024-07-10 12:31:24.724299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.885 ms 00:29:15.422 [2024-07-10 12:31:24.724310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.422 [2024-07-10 12:31:24.724412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.422 [2024-07-10 12:31:24.724425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:15.422 [2024-07-10 12:31:24.724437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:15.422 [2024-07-10 12:31:24.724447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.422 [2024-07-10 12:31:24.724891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.422 [2024-07-10 12:31:24.724905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:15.422 [2024-07-10 12:31:24.724916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.420 ms 00:29:15.422 [2024-07-10 12:31:24.724926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.422 [2024-07-10 12:31:24.725048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.422 [2024-07-10 12:31:24.725065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:15.422 [2024-07-10 12:31:24.725076] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:29:15.422 [2024-07-10 12:31:24.725086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.422 [2024-07-10 12:31:24.745747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.422 [2024-07-10 12:31:24.745788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:15.422 [2024-07-10 12:31:24.745802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.670 ms 00:29:15.422 [2024-07-10 12:31:24.745828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.422 [2024-07-10 12:31:24.766607] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:29:15.422 [2024-07-10 12:31:24.766647] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:15.422 [2024-07-10 12:31:24.766662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.422 [2024-07-10 12:31:24.766672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:15.422 [2024-07-10 12:31:24.766683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.711 ms 00:29:15.422 [2024-07-10 12:31:24.766693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.422 [2024-07-10 12:31:24.796888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.422 [2024-07-10 12:31:24.796928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:15.422 [2024-07-10 12:31:24.796943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.133 ms 00:29:15.423 [2024-07-10 12:31:24.796953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.423 [2024-07-10 12:31:24.816742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.423 [2024-07-10 12:31:24.816785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:15.423 [2024-07-10 12:31:24.816798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.737 ms 00:29:15.423 [2024-07-10 12:31:24.816808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.423 [2024-07-10 12:31:24.836571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.423 [2024-07-10 12:31:24.836607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:15.423 [2024-07-10 12:31:24.836620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.717 ms 00:29:15.423 [2024-07-10 12:31:24.836631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.423 [2024-07-10 12:31:24.837495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.423 [2024-07-10 12:31:24.837526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:15.423 [2024-07-10 12:31:24.837543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.747 ms 00:29:15.423 [2024-07-10 12:31:24.837553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.682 [2024-07-10 12:31:24.927714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.682 [2024-07-10 12:31:24.927784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:15.682 [2024-07-10 12:31:24.927809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 90.276 ms 00:29:15.682 [2024-07-10 12:31:24.927821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.682 [2024-07-10 12:31:24.940520] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:29:15.682 [2024-07-10 12:31:24.956911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.682 [2024-07-10 12:31:24.956974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:15.682 [2024-07-10 12:31:24.956993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.019 ms 00:29:15.682 [2024-07-10 12:31:24.957003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.682 [2024-07-10 12:31:24.957123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.682 [2024-07-10 12:31:24.957138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:15.682 [2024-07-10 12:31:24.957151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:29:15.682 [2024-07-10 12:31:24.957165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.682 [2024-07-10 12:31:24.957225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.682 [2024-07-10 12:31:24.957236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:15.682 [2024-07-10 12:31:24.957248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:29:15.682 [2024-07-10 12:31:24.957258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.682 [2024-07-10 12:31:24.957282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.682 [2024-07-10 12:31:24.957293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:15.682 [2024-07-10 12:31:24.957303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:15.682 [2024-07-10 12:31:24.957313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.682 [2024-07-10 12:31:24.957353] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:15.682 [2024-07-10 12:31:24.957365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.682 [2024-07-10 12:31:24.957375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:15.682 [2024-07-10 12:31:24.957386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:29:15.682 [2024-07-10 12:31:24.957396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.682 [2024-07-10 12:31:24.994811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.682 [2024-07-10 12:31:24.994856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:15.682 [2024-07-10 12:31:24.994871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.452 ms 00:29:15.682 [2024-07-10 12:31:24.994889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.682 [2024-07-10 12:31:24.995005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.682 [2024-07-10 12:31:24.995019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:15.682 [2024-07-10 12:31:24.995031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:29:15.682 [2024-07-10 12:31:24.995042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:29:15.682 [2024-07-10 12:31:24.995932] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:15.682 [2024-07-10 12:31:25.001123] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 419.904 ms, result 0 00:29:15.682 [2024-07-10 12:31:25.002063] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:15.682 [2024-07-10 12:31:25.020232] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:25.471  Copying: 25/256 [MB] (25 MBps) Copying: 52/256 [MB] (26 MBps) Copying: 78/256 [MB] (26 MBps) Copying: 104/256 [MB] (25 MBps) Copying: 129/256 [MB] (25 MBps) Copying: 155/256 [MB] (25 MBps) Copying: 181/256 [MB] (25 MBps) Copying: 207/256 [MB] (26 MBps) Copying: 233/256 [MB] (25 MBps) Copying: 256/256 [MB] (average 26 MBps)[2024-07-10 12:31:34.844679] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:25.471 [2024-07-10 12:31:34.859931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.471 [2024-07-10 12:31:34.859977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:25.471 [2024-07-10 12:31:34.859995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:25.471 [2024-07-10 12:31:34.860023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.471 [2024-07-10 12:31:34.860048] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:29:25.471 [2024-07-10 12:31:34.863868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.471 [2024-07-10 12:31:34.863896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:25.471 [2024-07-10 12:31:34.863908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.801 ms 00:29:25.471 [2024-07-10 12:31:34.863942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.471 [2024-07-10 12:31:34.865692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.471 [2024-07-10 12:31:34.865745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:25.471 [2024-07-10 12:31:34.865758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.728 ms 00:29:25.471 [2024-07-10 12:31:34.865769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.471 [2024-07-10 12:31:34.872527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.471 [2024-07-10 12:31:34.872564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:25.471 [2024-07-10 12:31:34.872577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.749 ms 00:29:25.471 [2024-07-10 12:31:34.872587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.471 [2024-07-10 12:31:34.878261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.471 [2024-07-10 12:31:34.878295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:25.471 [2024-07-10 12:31:34.878307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.621 ms 00:29:25.471 [2024-07-10 12:31:34.878317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.471 [2024-07-10 12:31:34.917102] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.471 [2024-07-10 12:31:34.917138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:25.471 [2024-07-10 12:31:34.917153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.798 ms 00:29:25.471 [2024-07-10 12:31:34.917164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.471 [2024-07-10 12:31:34.938009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.471 [2024-07-10 12:31:34.938063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:25.471 [2024-07-10 12:31:34.938078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.820 ms 00:29:25.472 [2024-07-10 12:31:34.938089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.472 [2024-07-10 12:31:34.938230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.472 [2024-07-10 12:31:34.938247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:25.472 [2024-07-10 12:31:34.938258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:29:25.472 [2024-07-10 12:31:34.938268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.732 [2024-07-10 12:31:34.977403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.732 [2024-07-10 12:31:34.977451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:29:25.732 [2024-07-10 12:31:34.977465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.180 ms 00:29:25.732 [2024-07-10 12:31:34.977475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.732 [2024-07-10 12:31:35.016257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.732 [2024-07-10 12:31:35.016299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:29:25.732 [2024-07-10 12:31:35.016314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.771 ms 00:29:25.732 [2024-07-10 12:31:35.016325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.732 [2024-07-10 12:31:35.055154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.732 [2024-07-10 12:31:35.055204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:25.732 [2024-07-10 12:31:35.055220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.832 ms 00:29:25.732 [2024-07-10 12:31:35.055231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.732 [2024-07-10 12:31:35.094539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.732 [2024-07-10 12:31:35.094586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:25.732 [2024-07-10 12:31:35.094602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.279 ms 00:29:25.732 [2024-07-10 12:31:35.094612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.732 [2024-07-10 12:31:35.094669] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:25.732 [2024-07-10 12:31:35.094688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:25.732 [2024-07-10 12:31:35.094702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:25.732 [2024-07-10 
12:31:35.094714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:25.732 [2024-07-10 12:31:35.094726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:25.732 [2024-07-10 12:31:35.094750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:25.732 [2024-07-10 12:31:35.094762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:25.732 [2024-07-10 12:31:35.094774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:25.732 [2024-07-10 12:31:35.094785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:25.732 [2024-07-10 12:31:35.094797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:25.732 [2024-07-10 12:31:35.094808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:25.732 [2024-07-10 12:31:35.094819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:25.732 [2024-07-10 12:31:35.094830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:25.732 [2024-07-10 12:31:35.094841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:25.732 [2024-07-10 12:31:35.094852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:25.732 [2024-07-10 12:31:35.094863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:25.732 [2024-07-10 12:31:35.094873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:25.732 [2024-07-10 12:31:35.094884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:25.732 [2024-07-10 12:31:35.094895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:25.732 [2024-07-10 12:31:35.094906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:25.732 [2024-07-10 12:31:35.094916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:25.732 [2024-07-10 12:31:35.094927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:25.732 [2024-07-10 12:31:35.094937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:25.732 [2024-07-10 12:31:35.094948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:25.732 [2024-07-10 12:31:35.094959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:25.732 [2024-07-10 12:31:35.094969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:25.732 [2024-07-10 12:31:35.094979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:25.732 [2024-07-10 12:31:35.094992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 
00:29:25.732 [2024-07-10 12:31:35.095004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:25.732 [2024-07-10 12:31:35.095014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:25.732 [2024-07-10 12:31:35.095026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:25.732 [2024-07-10 12:31:35.095038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:25.732 [2024-07-10 12:31:35.095050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:25.732 [2024-07-10 12:31:35.095061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:25.732 [2024-07-10 12:31:35.095072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:25.732 [2024-07-10 12:31:35.095083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:25.732 [2024-07-10 12:31:35.095095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:25.732 [2024-07-10 12:31:35.095105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:25.732 [2024-07-10 12:31:35.095116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:25.732 [2024-07-10 12:31:35.095127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:25.732 [2024-07-10 12:31:35.095138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:25.732 [2024-07-10 12:31:35.095148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:25.732 [2024-07-10 12:31:35.095159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:25.732 [2024-07-10 12:31:35.095171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:25.732 [2024-07-10 12:31:35.095181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:25.732 [2024-07-10 12:31:35.095192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:25.732 [2024-07-10 12:31:35.095203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:25.732 [2024-07-10 12:31:35.095214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:25.732 [2024-07-10 12:31:35.095225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:25.732 [2024-07-10 12:31:35.095236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:25.732 [2024-07-10 12:31:35.095247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:25.732 [2024-07-10 12:31:35.095258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:25.732 [2024-07-10 12:31:35.095268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 
wr_cnt: 0 state: free 00:29:25.732 [2024-07-10 12:31:35.095279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:25.732 [2024-07-10 12:31:35.095289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:25.732 [2024-07-10 12:31:35.095300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:25.732 [2024-07-10 12:31:35.095311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:25.732 [2024-07-10 12:31:35.095321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:25.732 [2024-07-10 12:31:35.095332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:25.732 [2024-07-10 12:31:35.095342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:25.732 [2024-07-10 12:31:35.095353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:25.732 [2024-07-10 12:31:35.095363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:25.732 [2024-07-10 12:31:35.095375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:25.732 [2024-07-10 12:31:35.095386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:25.732 [2024-07-10 12:31:35.095399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:25.732 [2024-07-10 12:31:35.095410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:25.732 [2024-07-10 12:31:35.095421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:25.732 [2024-07-10 12:31:35.095431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:25.732 [2024-07-10 12:31:35.095443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:25.732 [2024-07-10 12:31:35.095453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:25.733 [2024-07-10 12:31:35.095464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:25.733 [2024-07-10 12:31:35.095475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:25.733 [2024-07-10 12:31:35.095486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:25.733 [2024-07-10 12:31:35.095497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:25.733 [2024-07-10 12:31:35.095508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:25.733 [2024-07-10 12:31:35.095519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:25.733 [2024-07-10 12:31:35.095530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:25.733 [2024-07-10 12:31:35.095541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:25.733 [2024-07-10 12:31:35.095552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:25.733 [2024-07-10 12:31:35.095563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:25.733 [2024-07-10 12:31:35.095573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:25.733 [2024-07-10 12:31:35.095584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:25.733 [2024-07-10 12:31:35.095594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:25.733 [2024-07-10 12:31:35.095605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:25.733 [2024-07-10 12:31:35.095616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:25.733 [2024-07-10 12:31:35.095627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:25.733 [2024-07-10 12:31:35.095637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:25.733 [2024-07-10 12:31:35.095648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:25.733 [2024-07-10 12:31:35.095659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:25.733 [2024-07-10 12:31:35.095669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:25.733 [2024-07-10 12:31:35.095680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:25.733 [2024-07-10 12:31:35.095690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:25.733 [2024-07-10 12:31:35.095701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:25.733 [2024-07-10 12:31:35.095711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:25.733 [2024-07-10 12:31:35.095722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:25.733 [2024-07-10 12:31:35.095741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:25.733 [2024-07-10 12:31:35.095753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:25.733 [2024-07-10 12:31:35.095764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:25.733 [2024-07-10 12:31:35.095775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:25.733 [2024-07-10 12:31:35.095786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:25.733 [2024-07-10 12:31:35.095797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:25.733 [2024-07-10 12:31:35.095816] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:25.733 [2024-07-10 12:31:35.095835] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 
be5c4b66-7920-4ed3-b7e3-f52bd4043dbc 00:29:25.733 [2024-07-10 12:31:35.095846] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:25.733 [2024-07-10 12:31:35.095856] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:25.733 [2024-07-10 12:31:35.095865] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:25.733 [2024-07-10 12:31:35.095888] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:25.733 [2024-07-10 12:31:35.095898] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:25.733 [2024-07-10 12:31:35.095909] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:25.733 [2024-07-10 12:31:35.095919] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:25.733 [2024-07-10 12:31:35.095928] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:25.733 [2024-07-10 12:31:35.095937] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:25.733 [2024-07-10 12:31:35.095947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.733 [2024-07-10 12:31:35.095958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:25.733 [2024-07-10 12:31:35.095970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.282 ms 00:29:25.733 [2024-07-10 12:31:35.095979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.733 [2024-07-10 12:31:35.116513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.733 [2024-07-10 12:31:35.116551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:25.733 [2024-07-10 12:31:35.116564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.541 ms 00:29:25.733 [2024-07-10 12:31:35.116575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.733 [2024-07-10 12:31:35.117147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:25.733 [2024-07-10 12:31:35.117160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:25.733 [2024-07-10 12:31:35.117171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.533 ms 00:29:25.733 [2024-07-10 12:31:35.117188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.733 [2024-07-10 12:31:35.165887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:25.733 [2024-07-10 12:31:35.165926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:25.733 [2024-07-10 12:31:35.165939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:25.733 [2024-07-10 12:31:35.165966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.733 [2024-07-10 12:31:35.166040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:25.733 [2024-07-10 12:31:35.166052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:25.733 [2024-07-10 12:31:35.166063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:25.733 [2024-07-10 12:31:35.166079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.733 [2024-07-10 12:31:35.166128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:25.733 [2024-07-10 12:31:35.166140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:25.733 
[2024-07-10 12:31:35.166150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:25.733 [2024-07-10 12:31:35.166161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.733 [2024-07-10 12:31:35.166181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:25.733 [2024-07-10 12:31:35.166191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:25.733 [2024-07-10 12:31:35.166201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:25.733 [2024-07-10 12:31:35.166212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.992 [2024-07-10 12:31:35.286928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:25.992 [2024-07-10 12:31:35.286998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:25.992 [2024-07-10 12:31:35.287016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:25.992 [2024-07-10 12:31:35.287028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.992 [2024-07-10 12:31:35.390357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:25.992 [2024-07-10 12:31:35.390426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:25.992 [2024-07-10 12:31:35.390443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:25.992 [2024-07-10 12:31:35.390454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.992 [2024-07-10 12:31:35.390535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:25.992 [2024-07-10 12:31:35.390547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:25.992 [2024-07-10 12:31:35.390558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:25.992 [2024-07-10 12:31:35.390569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.992 [2024-07-10 12:31:35.390600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:25.992 [2024-07-10 12:31:35.390611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:25.992 [2024-07-10 12:31:35.390622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:25.992 [2024-07-10 12:31:35.390632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.992 [2024-07-10 12:31:35.390766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:25.992 [2024-07-10 12:31:35.390780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:25.992 [2024-07-10 12:31:35.390792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:25.992 [2024-07-10 12:31:35.390803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.992 [2024-07-10 12:31:35.390844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:25.992 [2024-07-10 12:31:35.390856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:25.992 [2024-07-10 12:31:35.390866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:25.992 [2024-07-10 12:31:35.390877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.992 [2024-07-10 12:31:35.390920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:25.992 [2024-07-10 12:31:35.390935] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:25.992 [2024-07-10 12:31:35.390946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:25.992 [2024-07-10 12:31:35.390957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.992 [2024-07-10 12:31:35.391004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:25.992 [2024-07-10 12:31:35.391016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:25.992 [2024-07-10 12:31:35.391026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:25.992 [2024-07-10 12:31:35.391037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:25.992 [2024-07-10 12:31:35.391190] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 532.113 ms, result 0 00:29:27.368 00:29:27.368 00:29:27.368 12:31:36 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=81351 00:29:27.368 12:31:36 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:29:27.368 12:31:36 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 81351 00:29:27.368 12:31:36 ftl.ftl_trim -- common/autotest_common.sh@829 -- # '[' -z 81351 ']' 00:29:27.368 12:31:36 ftl.ftl_trim -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:27.368 12:31:36 ftl.ftl_trim -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:27.368 12:31:36 ftl.ftl_trim -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:27.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:27.368 12:31:36 ftl.ftl_trim -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:27.368 12:31:36 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:29:27.368 [2024-07-10 12:31:36.782741] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:29:27.368 [2024-07-10 12:31:36.782879] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81351 ] 00:29:27.627 [2024-07-10 12:31:36.954315] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:27.885 [2024-07-10 12:31:37.187717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:28.823 12:31:38 ftl.ftl_trim -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:28.823 12:31:38 ftl.ftl_trim -- common/autotest_common.sh@862 -- # return 0 00:29:28.823 12:31:38 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:29:28.823 [2024-07-10 12:31:38.301171] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:28.823 [2024-07-10 12:31:38.301249] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:29.082 [2024-07-10 12:31:38.481878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.082 [2024-07-10 12:31:38.481945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:29.082 [2024-07-10 12:31:38.481961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:29.082 [2024-07-10 12:31:38.481975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.082 [2024-07-10 12:31:38.485433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.082 [2024-07-10 12:31:38.485478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:29.083 [2024-07-10 12:31:38.485491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.442 ms 00:29:29.083 [2024-07-10 12:31:38.485504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.083 [2024-07-10 12:31:38.485604] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:29.083 [2024-07-10 12:31:38.486716] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:29.083 [2024-07-10 12:31:38.486762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.083 [2024-07-10 12:31:38.486777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:29.083 [2024-07-10 12:31:38.486788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.168 ms 00:29:29.083 [2024-07-10 12:31:38.486801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.083 [2024-07-10 12:31:38.488289] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:29.083 [2024-07-10 12:31:38.508570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.083 [2024-07-10 12:31:38.508610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:29.083 [2024-07-10 12:31:38.508629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.308 ms 00:29:29.083 [2024-07-10 12:31:38.508640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.083 [2024-07-10 12:31:38.508767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.083 [2024-07-10 12:31:38.508783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:29.083 [2024-07-10 12:31:38.508798] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:29:29.083 [2024-07-10 12:31:38.508808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.083 [2024-07-10 12:31:38.517468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.083 [2024-07-10 12:31:38.517501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:29.083 [2024-07-10 12:31:38.517522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.618 ms 00:29:29.083 [2024-07-10 12:31:38.517533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.083 [2024-07-10 12:31:38.517656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.083 [2024-07-10 12:31:38.517671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:29.083 [2024-07-10 12:31:38.517686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:29:29.083 [2024-07-10 12:31:38.517696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.083 [2024-07-10 12:31:38.517755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.083 [2024-07-10 12:31:38.517767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:29.083 [2024-07-10 12:31:38.517781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:29:29.083 [2024-07-10 12:31:38.517791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.083 [2024-07-10 12:31:38.517821] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:29:29.083 [2024-07-10 12:31:38.524002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.083 [2024-07-10 12:31:38.524037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:29.083 [2024-07-10 12:31:38.524049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.201 ms 00:29:29.083 [2024-07-10 12:31:38.524085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.083 [2024-07-10 12:31:38.524155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.083 [2024-07-10 12:31:38.524174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:29.083 [2024-07-10 12:31:38.524186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:29:29.083 [2024-07-10 12:31:38.524203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.083 [2024-07-10 12:31:38.524225] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:29.083 [2024-07-10 12:31:38.524254] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:29.083 [2024-07-10 12:31:38.524298] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:29.083 [2024-07-10 12:31:38.524322] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:29:29.083 [2024-07-10 12:31:38.524407] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:29.083 [2024-07-10 12:31:38.524425] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:29.083 [2024-07-10 12:31:38.524442] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:29:29.083 [2024-07-10 12:31:38.524458] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:29.083 [2024-07-10 12:31:38.524471] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:29.083 [2024-07-10 12:31:38.524485] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:29:29.083 [2024-07-10 12:31:38.524496] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:29.083 [2024-07-10 12:31:38.524509] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:29.083 [2024-07-10 12:31:38.524519] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:29.083 [2024-07-10 12:31:38.524536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.083 [2024-07-10 12:31:38.524546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:29.083 [2024-07-10 12:31:38.524560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.307 ms 00:29:29.083 [2024-07-10 12:31:38.524570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.083 [2024-07-10 12:31:38.524649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.083 [2024-07-10 12:31:38.524660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:29.083 [2024-07-10 12:31:38.524674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:29:29.083 [2024-07-10 12:31:38.524684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.083 [2024-07-10 12:31:38.524794] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:29.083 [2024-07-10 12:31:38.524809] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:29.083 [2024-07-10 12:31:38.524823] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:29.083 [2024-07-10 12:31:38.524834] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:29.083 [2024-07-10 12:31:38.524848] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:29.083 [2024-07-10 12:31:38.524857] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:29.083 [2024-07-10 12:31:38.524872] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:29:29.083 [2024-07-10 12:31:38.524882] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:29.083 [2024-07-10 12:31:38.524898] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:29:29.083 [2024-07-10 12:31:38.524907] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:29.083 [2024-07-10 12:31:38.524926] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:29.083 [2024-07-10 12:31:38.524936] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:29:29.083 [2024-07-10 12:31:38.524949] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:29.083 [2024-07-10 12:31:38.524959] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:29.083 [2024-07-10 12:31:38.524971] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:29:29.083 [2024-07-10 12:31:38.524980] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:29.083 
[2024-07-10 12:31:38.524992] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:29.083 [2024-07-10 12:31:38.525002] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:29:29.083 [2024-07-10 12:31:38.525014] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:29.083 [2024-07-10 12:31:38.525023] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:29.083 [2024-07-10 12:31:38.525035] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:29:29.083 [2024-07-10 12:31:38.525045] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:29.083 [2024-07-10 12:31:38.525057] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:29.083 [2024-07-10 12:31:38.525066] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:29:29.083 [2024-07-10 12:31:38.525081] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:29.083 [2024-07-10 12:31:38.525090] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:29.083 [2024-07-10 12:31:38.525102] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:29:29.083 [2024-07-10 12:31:38.525120] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:29.083 [2024-07-10 12:31:38.525132] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:29.083 [2024-07-10 12:31:38.525141] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:29:29.083 [2024-07-10 12:31:38.525155] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:29.083 [2024-07-10 12:31:38.525164] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:29.083 [2024-07-10 12:31:38.525176] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:29:29.083 [2024-07-10 12:31:38.525185] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:29.083 [2024-07-10 12:31:38.525197] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:29.083 [2024-07-10 12:31:38.525206] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:29:29.083 [2024-07-10 12:31:38.525218] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:29.083 [2024-07-10 12:31:38.525228] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:29.083 [2024-07-10 12:31:38.525241] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:29:29.083 [2024-07-10 12:31:38.525250] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:29.083 [2024-07-10 12:31:38.525265] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:29.083 [2024-07-10 12:31:38.525274] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:29:29.083 [2024-07-10 12:31:38.525287] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:29.083 [2024-07-10 12:31:38.525296] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:29.083 [2024-07-10 12:31:38.525313] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:29.083 [2024-07-10 12:31:38.525323] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:29.083 [2024-07-10 12:31:38.525335] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:29.083 [2024-07-10 12:31:38.525345] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:29:29.083 [2024-07-10 12:31:38.525357] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:29.083 [2024-07-10 12:31:38.525367] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:29.083 [2024-07-10 12:31:38.525379] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:29.083 [2024-07-10 12:31:38.525389] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:29.083 [2024-07-10 12:31:38.525402] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:29.084 [2024-07-10 12:31:38.525412] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:29.084 [2024-07-10 12:31:38.525427] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:29.084 [2024-07-10 12:31:38.525439] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:29:29.084 [2024-07-10 12:31:38.525457] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:29:29.084 [2024-07-10 12:31:38.525468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:29:29.084 [2024-07-10 12:31:38.525481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:29:29.084 [2024-07-10 12:31:38.525491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:29:29.084 [2024-07-10 12:31:38.525504] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:29:29.084 [2024-07-10 12:31:38.525514] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:29:29.084 [2024-07-10 12:31:38.525527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:29:29.084 [2024-07-10 12:31:38.525538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:29:29.084 [2024-07-10 12:31:38.525551] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:29:29.084 [2024-07-10 12:31:38.525561] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:29:29.084 [2024-07-10 12:31:38.525573] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:29:29.084 [2024-07-10 12:31:38.525584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:29:29.084 [2024-07-10 12:31:38.525597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:29:29.084 [2024-07-10 12:31:38.525607] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:29.084 [2024-07-10 
12:31:38.525622] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:29.084 [2024-07-10 12:31:38.525633] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:29.084 [2024-07-10 12:31:38.525650] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:29.084 [2024-07-10 12:31:38.525660] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:29.084 [2024-07-10 12:31:38.525673] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:29.084 [2024-07-10 12:31:38.525684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.084 [2024-07-10 12:31:38.525699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:29.084 [2024-07-10 12:31:38.525709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.962 ms 00:29:29.084 [2024-07-10 12:31:38.525722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.343 [2024-07-10 12:31:38.571544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.343 [2024-07-10 12:31:38.571602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:29.343 [2024-07-10 12:31:38.571619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.816 ms 00:29:29.343 [2024-07-10 12:31:38.571637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.343 [2024-07-10 12:31:38.571802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.343 [2024-07-10 12:31:38.571821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:29.343 [2024-07-10 12:31:38.571832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:29:29.343 [2024-07-10 12:31:38.571846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.343 [2024-07-10 12:31:38.623894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.343 [2024-07-10 12:31:38.623953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:29.343 [2024-07-10 12:31:38.623968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.106 ms 00:29:29.343 [2024-07-10 12:31:38.623981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.343 [2024-07-10 12:31:38.624075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.343 [2024-07-10 12:31:38.624108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:29.343 [2024-07-10 12:31:38.624120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:29.343 [2024-07-10 12:31:38.624133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.343 [2024-07-10 12:31:38.625097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.343 [2024-07-10 12:31:38.625161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:29.343 [2024-07-10 12:31:38.625202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.942 ms 00:29:29.343 [2024-07-10 12:31:38.625235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:29:29.343 [2024-07-10 12:31:38.625463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.343 [2024-07-10 12:31:38.625507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:29.343 [2024-07-10 12:31:38.625539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:29:29.343 [2024-07-10 12:31:38.625572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.343 [2024-07-10 12:31:38.651833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.343 [2024-07-10 12:31:38.652018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:29.343 [2024-07-10 12:31:38.652192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.202 ms 00:29:29.343 [2024-07-10 12:31:38.652235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.343 [2024-07-10 12:31:38.673691] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:29:29.343 [2024-07-10 12:31:38.673881] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:29.343 [2024-07-10 12:31:38.673904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.343 [2024-07-10 12:31:38.673919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:29.343 [2024-07-10 12:31:38.673932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.550 ms 00:29:29.343 [2024-07-10 12:31:38.673945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.343 [2024-07-10 12:31:38.703528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.343 [2024-07-10 12:31:38.703572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:29.343 [2024-07-10 12:31:38.703586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.555 ms 00:29:29.343 [2024-07-10 12:31:38.703600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.343 [2024-07-10 12:31:38.722554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.343 [2024-07-10 12:31:38.722594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:29.343 [2024-07-10 12:31:38.722619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.906 ms 00:29:29.343 [2024-07-10 12:31:38.722636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.343 [2024-07-10 12:31:38.742410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.343 [2024-07-10 12:31:38.742452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:29.343 [2024-07-10 12:31:38.742465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.733 ms 00:29:29.343 [2024-07-10 12:31:38.742478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.343 [2024-07-10 12:31:38.743390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.343 [2024-07-10 12:31:38.743425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:29.343 [2024-07-10 12:31:38.743438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.809 ms 00:29:29.343 [2024-07-10 12:31:38.743451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.603 [2024-07-10 
12:31:38.847544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.603 [2024-07-10 12:31:38.847631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:29.603 [2024-07-10 12:31:38.847652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 104.233 ms 00:29:29.603 [2024-07-10 12:31:38.847666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.603 [2024-07-10 12:31:38.860385] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:29:29.603 [2024-07-10 12:31:38.886771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.603 [2024-07-10 12:31:38.886841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:29.603 [2024-07-10 12:31:38.886881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.024 ms 00:29:29.603 [2024-07-10 12:31:38.886897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.603 [2024-07-10 12:31:38.887017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.603 [2024-07-10 12:31:38.887030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:29.603 [2024-07-10 12:31:38.887046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:29.603 [2024-07-10 12:31:38.887056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.603 [2024-07-10 12:31:38.887122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.603 [2024-07-10 12:31:38.887134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:29.603 [2024-07-10 12:31:38.887148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:29:29.603 [2024-07-10 12:31:38.887158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.603 [2024-07-10 12:31:38.887192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.603 [2024-07-10 12:31:38.887202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:29.603 [2024-07-10 12:31:38.887219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:29.603 [2024-07-10 12:31:38.887229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.603 [2024-07-10 12:31:38.887267] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:29.603 [2024-07-10 12:31:38.887279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.603 [2024-07-10 12:31:38.887296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:29.603 [2024-07-10 12:31:38.887306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:29:29.603 [2024-07-10 12:31:38.887319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.603 [2024-07-10 12:31:38.926304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.603 [2024-07-10 12:31:38.926352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:29.603 [2024-07-10 12:31:38.926368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.022 ms 00:29:29.603 [2024-07-10 12:31:38.926382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.603 [2024-07-10 12:31:38.926498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.603 [2024-07-10 12:31:38.926515] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:29.603 [2024-07-10 12:31:38.926527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:29:29.603 [2024-07-10 12:31:38.926540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.603 [2024-07-10 12:31:38.927768] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:29.603 [2024-07-10 12:31:38.933257] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 446.269 ms, result 0 00:29:29.603 [2024-07-10 12:31:38.934455] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:29.603 Some configs were skipped because the RPC state that can call them passed over. 00:29:29.603 12:31:38 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:29:29.863 [2024-07-10 12:31:39.170081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.863 [2024-07-10 12:31:39.170265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:29:29.863 [2024-07-10 12:31:39.170354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.419 ms 00:29:29.863 [2024-07-10 12:31:39.170391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.863 [2024-07-10 12:31:39.170468] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.813 ms, result 0 00:29:29.863 true 00:29:29.863 12:31:39 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:29:30.122 [2024-07-10 12:31:39.361779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:30.122 [2024-07-10 12:31:39.362048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:29:30.122 [2024-07-10 12:31:39.362130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.270 ms 00:29:30.122 [2024-07-10 12:31:39.362170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:30.122 [2024-07-10 12:31:39.362249] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.740 ms, result 0 00:29:30.122 true 00:29:30.122 12:31:39 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 81351 00:29:30.122 12:31:39 ftl.ftl_trim -- common/autotest_common.sh@948 -- # '[' -z 81351 ']' 00:29:30.122 12:31:39 ftl.ftl_trim -- common/autotest_common.sh@952 -- # kill -0 81351 00:29:30.122 12:31:39 ftl.ftl_trim -- common/autotest_common.sh@953 -- # uname 00:29:30.122 12:31:39 ftl.ftl_trim -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:30.122 12:31:39 ftl.ftl_trim -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81351 00:29:30.122 killing process with pid 81351 00:29:30.122 12:31:39 ftl.ftl_trim -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:30.122 12:31:39 ftl.ftl_trim -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:30.122 12:31:39 ftl.ftl_trim -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81351' 00:29:30.122 12:31:39 ftl.ftl_trim -- common/autotest_common.sh@967 -- # kill 81351 00:29:30.122 12:31:39 ftl.ftl_trim -- common/autotest_common.sh@972 -- # wait 81351 00:29:31.502 [2024-07-10 12:31:40.546271] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.502 [2024-07-10 12:31:40.546344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:31.502 [2024-07-10 12:31:40.546362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:31.502 [2024-07-10 12:31:40.546389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.502 [2024-07-10 12:31:40.546416] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:29:31.502 [2024-07-10 12:31:40.550797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.502 [2024-07-10 12:31:40.550849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:31.502 [2024-07-10 12:31:40.550863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.370 ms 00:29:31.502 [2024-07-10 12:31:40.550878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.502 [2024-07-10 12:31:40.551135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.502 [2024-07-10 12:31:40.551150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:31.502 [2024-07-10 12:31:40.551160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.214 ms 00:29:31.502 [2024-07-10 12:31:40.551172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.502 [2024-07-10 12:31:40.554568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.502 [2024-07-10 12:31:40.554611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:31.502 [2024-07-10 12:31:40.554627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.382 ms 00:29:31.502 [2024-07-10 12:31:40.554639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.502 [2024-07-10 12:31:40.560225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.502 [2024-07-10 12:31:40.560266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:31.502 [2024-07-10 12:31:40.560279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.558 ms 00:29:31.502 [2024-07-10 12:31:40.560294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.502 [2024-07-10 12:31:40.576034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.502 [2024-07-10 12:31:40.576080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:31.502 [2024-07-10 12:31:40.576095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.708 ms 00:29:31.502 [2024-07-10 12:31:40.576111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.502 [2024-07-10 12:31:40.586198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.502 [2024-07-10 12:31:40.586240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:31.503 [2024-07-10 12:31:40.586257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.030 ms 00:29:31.503 [2024-07-10 12:31:40.586270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.503 [2024-07-10 12:31:40.586422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.503 [2024-07-10 12:31:40.586439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:31.503 [2024-07-10 12:31:40.586450] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:29:31.503 [2024-07-10 12:31:40.586475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.503 [2024-07-10 12:31:40.603191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.503 [2024-07-10 12:31:40.603228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:29:31.503 [2024-07-10 12:31:40.603241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.724 ms 00:29:31.503 [2024-07-10 12:31:40.603253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.503 [2024-07-10 12:31:40.618871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.503 [2024-07-10 12:31:40.618907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:29:31.503 [2024-07-10 12:31:40.618919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.579 ms 00:29:31.503 [2024-07-10 12:31:40.618938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.503 [2024-07-10 12:31:40.633948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.503 [2024-07-10 12:31:40.633985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:31.503 [2024-07-10 12:31:40.633997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.988 ms 00:29:31.503 [2024-07-10 12:31:40.634025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.503 [2024-07-10 12:31:40.649845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.503 [2024-07-10 12:31:40.649882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:31.503 [2024-07-10 12:31:40.649894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.784 ms 00:29:31.503 [2024-07-10 12:31:40.649907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.503 [2024-07-10 12:31:40.649953] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:31.503 [2024-07-10 12:31:40.649974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:31.503 [2024-07-10 12:31:40.649988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:31.503 [2024-07-10 12:31:40.650001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:31.503 [2024-07-10 12:31:40.650013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:31.503 [2024-07-10 12:31:40.650027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:31.503 [2024-07-10 12:31:40.650038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:31.503 [2024-07-10 12:31:40.650055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:31.503 [2024-07-10 12:31:40.650066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:31.503 [2024-07-10 12:31:40.650080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:31.503 [2024-07-10 12:31:40.650091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:31.503 [2024-07-10 
12:31:40.650105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:31.503 [2024-07-10 12:31:40.650116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:31.503 [2024-07-10 12:31:40.650130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:31.503 [2024-07-10 12:31:40.650142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:31.503 [2024-07-10 12:31:40.650155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:31.503 [2024-07-10 12:31:40.650166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:31.503 [2024-07-10 12:31:40.650182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:31.503 [2024-07-10 12:31:40.650193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:31.503 [2024-07-10 12:31:40.650206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:31.503 [2024-07-10 12:31:40.650216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:31.503 [2024-07-10 12:31:40.650230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:31.503 [2024-07-10 12:31:40.650240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:31.503 [2024-07-10 12:31:40.650257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:31.503 [2024-07-10 12:31:40.650268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:31.503 [2024-07-10 12:31:40.650281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:31.503 [2024-07-10 12:31:40.650291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:31.503 [2024-07-10 12:31:40.650306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:31.503 [2024-07-10 12:31:40.650317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:31.503 [2024-07-10 12:31:40.650331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:31.503 [2024-07-10 12:31:40.650342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:31.503 [2024-07-10 12:31:40.650356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:31.503 [2024-07-10 12:31:40.650367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:31.503 [2024-07-10 12:31:40.650382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:31.503 [2024-07-10 12:31:40.650392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:31.503 [2024-07-10 12:31:40.650407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:29:31.503 [2024-07-10 12:31:40.650418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:31.503 [2024-07-10 12:31:40.650432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:31.503 [2024-07-10 12:31:40.650442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:31.503 [2024-07-10 12:31:40.650459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:31.503 [2024-07-10 12:31:40.650470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:31.503 [2024-07-10 12:31:40.650485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:31.503 [2024-07-10 12:31:40.650496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:31.503 [2024-07-10 12:31:40.650512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:31.503 [2024-07-10 12:31:40.650522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:31.503 [2024-07-10 12:31:40.650536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:31.503 [2024-07-10 12:31:40.650547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:31.503 [2024-07-10 12:31:40.650560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:31.503 [2024-07-10 12:31:40.650571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:31.503 [2024-07-10 12:31:40.650585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:31.503 [2024-07-10 12:31:40.650596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:31.503 [2024-07-10 12:31:40.650609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:31.503 [2024-07-10 12:31:40.650620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:31.503 [2024-07-10 12:31:40.650633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:31.503 [2024-07-10 12:31:40.650644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:31.503 [2024-07-10 12:31:40.650661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:31.503 [2024-07-10 12:31:40.650672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:31.503 [2024-07-10 12:31:40.650685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:31.503 [2024-07-10 12:31:40.650696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:31.504 [2024-07-10 12:31:40.650709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:31.504 [2024-07-10 12:31:40.650719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:29:31.504 [2024-07-10 12:31:40.650747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:31.504 [2024-07-10 12:31:40.650758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:31.504 [2024-07-10 12:31:40.650773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:31.504 [2024-07-10 12:31:40.650784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:31.504 [2024-07-10 12:31:40.650798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:31.504 [2024-07-10 12:31:40.650809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:31.504 [2024-07-10 12:31:40.650823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:31.504 [2024-07-10 12:31:40.650834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:31.504 [2024-07-10 12:31:40.650849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:31.504 [2024-07-10 12:31:40.650860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:31.504 [2024-07-10 12:31:40.650877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:31.504 [2024-07-10 12:31:40.650888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:31.504 [2024-07-10 12:31:40.650902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:31.504 [2024-07-10 12:31:40.650913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:31.504 [2024-07-10 12:31:40.650926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:31.504 [2024-07-10 12:31:40.650937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:31.504 [2024-07-10 12:31:40.650950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:31.504 [2024-07-10 12:31:40.650961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:31.504 [2024-07-10 12:31:40.650974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:31.504 [2024-07-10 12:31:40.650985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:31.504 [2024-07-10 12:31:40.650998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:31.504 [2024-07-10 12:31:40.651009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:31.504 [2024-07-10 12:31:40.651023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:31.504 [2024-07-10 12:31:40.651034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:31.504 [2024-07-10 12:31:40.651047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:31.504 [2024-07-10 12:31:40.651058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:31.504 [2024-07-10 12:31:40.651074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:31.504 [2024-07-10 12:31:40.651085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:31.504 [2024-07-10 12:31:40.651098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:31.504 [2024-07-10 12:31:40.651109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:31.504 [2024-07-10 12:31:40.651122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:31.504 [2024-07-10 12:31:40.651133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:31.504 [2024-07-10 12:31:40.651147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:31.504 [2024-07-10 12:31:40.651158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:31.504 [2024-07-10 12:31:40.651172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:31.504 [2024-07-10 12:31:40.651183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:31.504 [2024-07-10 12:31:40.651199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:31.504 [2024-07-10 12:31:40.651210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:31.504 [2024-07-10 12:31:40.651224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:31.504 [2024-07-10 12:31:40.651235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:31.504 [2024-07-10 12:31:40.651256] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:31.504 [2024-07-10 12:31:40.651267] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: be5c4b66-7920-4ed3-b7e3-f52bd4043dbc 00:29:31.504 [2024-07-10 12:31:40.651287] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:31.504 [2024-07-10 12:31:40.651298] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:31.504 [2024-07-10 12:31:40.651310] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:31.504 [2024-07-10 12:31:40.651321] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:31.504 [2024-07-10 12:31:40.651334] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:31.504 [2024-07-10 12:31:40.651344] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:31.504 [2024-07-10 12:31:40.651357] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:31.504 [2024-07-10 12:31:40.651366] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:31.504 [2024-07-10 12:31:40.651389] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:31.504 [2024-07-10 12:31:40.651399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:29:31.504 [2024-07-10 12:31:40.651412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:31.504 [2024-07-10 12:31:40.651423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.450 ms 00:29:31.504 [2024-07-10 12:31:40.651436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.504 [2024-07-10 12:31:40.673540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.504 [2024-07-10 12:31:40.673577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:31.504 [2024-07-10 12:31:40.673606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.116 ms 00:29:31.504 [2024-07-10 12:31:40.673623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.504 [2024-07-10 12:31:40.674241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.504 [2024-07-10 12:31:40.674268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:31.504 [2024-07-10 12:31:40.674284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.570 ms 00:29:31.504 [2024-07-10 12:31:40.674301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.504 [2024-07-10 12:31:40.743831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:31.504 [2024-07-10 12:31:40.743873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:31.504 [2024-07-10 12:31:40.743887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:31.504 [2024-07-10 12:31:40.743900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.504 [2024-07-10 12:31:40.743987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:31.504 [2024-07-10 12:31:40.744003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:31.504 [2024-07-10 12:31:40.744015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:31.504 [2024-07-10 12:31:40.744032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.504 [2024-07-10 12:31:40.744096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:31.504 [2024-07-10 12:31:40.744114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:31.504 [2024-07-10 12:31:40.744125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:31.504 [2024-07-10 12:31:40.744142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.504 [2024-07-10 12:31:40.744162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:31.504 [2024-07-10 12:31:40.744175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:31.504 [2024-07-10 12:31:40.744186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:31.504 [2024-07-10 12:31:40.744198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.504 [2024-07-10 12:31:40.868972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:31.505 [2024-07-10 12:31:40.869041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:31.505 [2024-07-10 12:31:40.869058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:31.505 [2024-07-10 12:31:40.869071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.505 [2024-07-10 
12:31:40.973656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:31.505 [2024-07-10 12:31:40.973725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:31.505 [2024-07-10 12:31:40.973757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:31.505 [2024-07-10 12:31:40.973771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.505 [2024-07-10 12:31:40.973867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:31.505 [2024-07-10 12:31:40.973883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:31.505 [2024-07-10 12:31:40.973894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:31.505 [2024-07-10 12:31:40.973911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.505 [2024-07-10 12:31:40.973944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:31.505 [2024-07-10 12:31:40.973958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:31.505 [2024-07-10 12:31:40.973968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:31.505 [2024-07-10 12:31:40.973981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.505 [2024-07-10 12:31:40.974108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:31.505 [2024-07-10 12:31:40.974124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:31.505 [2024-07-10 12:31:40.974135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:31.505 [2024-07-10 12:31:40.974148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.505 [2024-07-10 12:31:40.974189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:31.505 [2024-07-10 12:31:40.974204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:31.505 [2024-07-10 12:31:40.974215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:31.505 [2024-07-10 12:31:40.974228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.505 [2024-07-10 12:31:40.974273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:31.505 [2024-07-10 12:31:40.974290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:31.505 [2024-07-10 12:31:40.974301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:31.505 [2024-07-10 12:31:40.974317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.505 [2024-07-10 12:31:40.974363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:31.505 [2024-07-10 12:31:40.974378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:31.505 [2024-07-10 12:31:40.974388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:31.505 [2024-07-10 12:31:40.974401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.505 [2024-07-10 12:31:40.974552] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 428.958 ms, result 0 00:29:32.882 12:31:42 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:29:32.882 12:31:42 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:32.882 [2024-07-10 12:31:42.135008] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:29:32.882 [2024-07-10 12:31:42.135137] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81422 ] 00:29:32.882 [2024-07-10 12:31:42.307059] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:33.141 [2024-07-10 12:31:42.541363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:33.707 [2024-07-10 12:31:42.944347] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:33.707 [2024-07-10 12:31:42.944425] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:33.707 [2024-07-10 12:31:43.107793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.707 [2024-07-10 12:31:43.107844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:33.707 [2024-07-10 12:31:43.107860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:33.707 [2024-07-10 12:31:43.107871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.707 [2024-07-10 12:31:43.111046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.707 [2024-07-10 12:31:43.111084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:33.707 [2024-07-10 12:31:43.111098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.159 ms 00:29:33.707 [2024-07-10 12:31:43.111108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.707 [2024-07-10 12:31:43.111207] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:33.707 [2024-07-10 12:31:43.112400] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:33.707 [2024-07-10 12:31:43.112434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.707 [2024-07-10 12:31:43.112446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:33.707 [2024-07-10 12:31:43.112457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.236 ms 00:29:33.707 [2024-07-10 12:31:43.112467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.707 [2024-07-10 12:31:43.114389] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:33.707 [2024-07-10 12:31:43.134533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.707 [2024-07-10 12:31:43.134570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:33.707 [2024-07-10 12:31:43.134590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.176 ms 00:29:33.707 [2024-07-10 12:31:43.134600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.707 [2024-07-10 12:31:43.134697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.707 [2024-07-10 12:31:43.134711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:33.707 [2024-07-10 12:31:43.134723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.025 ms 00:29:33.707 [2024-07-10 12:31:43.134750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.707 [2024-07-10 12:31:43.142979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.707 [2024-07-10 12:31:43.143011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:33.707 [2024-07-10 12:31:43.143024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.198 ms 00:29:33.707 [2024-07-10 12:31:43.143034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.707 [2024-07-10 12:31:43.143132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.707 [2024-07-10 12:31:43.143147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:33.707 [2024-07-10 12:31:43.143159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:29:33.707 [2024-07-10 12:31:43.143169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.707 [2024-07-10 12:31:43.143203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.707 [2024-07-10 12:31:43.143214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:33.707 [2024-07-10 12:31:43.143225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:29:33.707 [2024-07-10 12:31:43.143238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.707 [2024-07-10 12:31:43.143262] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:29:33.707 [2024-07-10 12:31:43.149191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.707 [2024-07-10 12:31:43.149242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:33.707 [2024-07-10 12:31:43.149255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.945 ms 00:29:33.707 [2024-07-10 12:31:43.149266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.707 [2024-07-10 12:31:43.149338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.707 [2024-07-10 12:31:43.149351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:33.707 [2024-07-10 12:31:43.149362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:29:33.707 [2024-07-10 12:31:43.149372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.707 [2024-07-10 12:31:43.149394] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:33.708 [2024-07-10 12:31:43.149430] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:33.708 [2024-07-10 12:31:43.149469] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:33.708 [2024-07-10 12:31:43.149486] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:29:33.708 [2024-07-10 12:31:43.149586] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:33.708 [2024-07-10 12:31:43.149599] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:33.708 [2024-07-10 12:31:43.149613] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:29:33.708 [2024-07-10 12:31:43.149627] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:33.708 [2024-07-10 12:31:43.149639] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:33.708 [2024-07-10 12:31:43.149650] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:29:33.708 [2024-07-10 12:31:43.149664] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:33.708 [2024-07-10 12:31:43.149675] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:33.708 [2024-07-10 12:31:43.149685] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:33.708 [2024-07-10 12:31:43.149696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.708 [2024-07-10 12:31:43.149706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:33.708 [2024-07-10 12:31:43.149717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.306 ms 00:29:33.708 [2024-07-10 12:31:43.149727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.708 [2024-07-10 12:31:43.149813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.708 [2024-07-10 12:31:43.149826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:33.708 [2024-07-10 12:31:43.149837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:29:33.708 [2024-07-10 12:31:43.149850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.708 [2024-07-10 12:31:43.149936] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:33.708 [2024-07-10 12:31:43.149949] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:33.708 [2024-07-10 12:31:43.149960] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:33.708 [2024-07-10 12:31:43.149970] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:33.708 [2024-07-10 12:31:43.149981] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:33.708 [2024-07-10 12:31:43.149990] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:33.708 [2024-07-10 12:31:43.150000] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:29:33.708 [2024-07-10 12:31:43.150011] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:33.708 [2024-07-10 12:31:43.150028] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:29:33.708 [2024-07-10 12:31:43.150037] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:33.708 [2024-07-10 12:31:43.150046] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:33.708 [2024-07-10 12:31:43.150056] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:29:33.708 [2024-07-10 12:31:43.150065] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:33.708 [2024-07-10 12:31:43.150074] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:33.708 [2024-07-10 12:31:43.150085] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:29:33.708 [2024-07-10 12:31:43.150095] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:33.708 [2024-07-10 12:31:43.150105] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:33.708 [2024-07-10 12:31:43.150115] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:29:33.708 [2024-07-10 12:31:43.150135] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:33.708 [2024-07-10 12:31:43.150145] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:33.708 [2024-07-10 12:31:43.150154] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:29:33.708 [2024-07-10 12:31:43.150163] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:33.708 [2024-07-10 12:31:43.150173] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:33.708 [2024-07-10 12:31:43.150182] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:29:33.708 [2024-07-10 12:31:43.150191] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:33.708 [2024-07-10 12:31:43.150201] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:33.708 [2024-07-10 12:31:43.150210] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:29:33.708 [2024-07-10 12:31:43.150220] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:33.708 [2024-07-10 12:31:43.150229] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:33.708 [2024-07-10 12:31:43.150238] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:29:33.708 [2024-07-10 12:31:43.150247] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:33.708 [2024-07-10 12:31:43.150256] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:33.708 [2024-07-10 12:31:43.150266] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:29:33.708 [2024-07-10 12:31:43.150275] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:33.708 [2024-07-10 12:31:43.150285] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:33.708 [2024-07-10 12:31:43.150294] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:29:33.708 [2024-07-10 12:31:43.150303] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:33.708 [2024-07-10 12:31:43.150312] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:33.708 [2024-07-10 12:31:43.150321] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:29:33.708 [2024-07-10 12:31:43.150330] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:33.708 [2024-07-10 12:31:43.150339] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:33.708 [2024-07-10 12:31:43.150348] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:29:33.708 [2024-07-10 12:31:43.150357] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:33.708 [2024-07-10 12:31:43.150366] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:33.708 [2024-07-10 12:31:43.150375] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:33.708 [2024-07-10 12:31:43.150385] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:33.708 [2024-07-10 12:31:43.150395] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:33.708 [2024-07-10 12:31:43.150405] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:33.708 
[2024-07-10 12:31:43.150415] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:33.708 [2024-07-10 12:31:43.150424] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:33.708 [2024-07-10 12:31:43.150434] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:33.708 [2024-07-10 12:31:43.150442] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:33.708 [2024-07-10 12:31:43.150451] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:33.708 [2024-07-10 12:31:43.150461] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:33.708 [2024-07-10 12:31:43.150478] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:33.708 [2024-07-10 12:31:43.150490] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:29:33.708 [2024-07-10 12:31:43.150500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:29:33.708 [2024-07-10 12:31:43.150511] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:29:33.708 [2024-07-10 12:31:43.150521] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:29:33.708 [2024-07-10 12:31:43.150532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:29:33.708 [2024-07-10 12:31:43.150542] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:29:33.708 [2024-07-10 12:31:43.150552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:29:33.708 [2024-07-10 12:31:43.150562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:29:33.708 [2024-07-10 12:31:43.150572] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:29:33.708 [2024-07-10 12:31:43.150583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:29:33.708 [2024-07-10 12:31:43.150592] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:29:33.708 [2024-07-10 12:31:43.150602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:29:33.708 [2024-07-10 12:31:43.150613] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:29:33.708 [2024-07-10 12:31:43.150623] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:29:33.708 [2024-07-10 12:31:43.150633] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:33.708 [2024-07-10 12:31:43.150644] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:33.708 [2024-07-10 12:31:43.150655] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:33.708 [2024-07-10 12:31:43.150665] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:33.708 [2024-07-10 12:31:43.150676] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:33.708 [2024-07-10 12:31:43.150686] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:33.708 [2024-07-10 12:31:43.150697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.708 [2024-07-10 12:31:43.150708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:33.708 [2024-07-10 12:31:43.150718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.813 ms 00:29:33.708 [2024-07-10 12:31:43.150738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.966 [2024-07-10 12:31:43.205351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.966 [2024-07-10 12:31:43.205415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:33.966 [2024-07-10 12:31:43.205445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.640 ms 00:29:33.966 [2024-07-10 12:31:43.205455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.966 [2024-07-10 12:31:43.205665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.966 [2024-07-10 12:31:43.205679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:33.966 [2024-07-10 12:31:43.205691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:29:33.966 [2024-07-10 12:31:43.205706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.966 [2024-07-10 12:31:43.259289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.966 [2024-07-10 12:31:43.259346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:33.966 [2024-07-10 12:31:43.259362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.642 ms 00:29:33.966 [2024-07-10 12:31:43.259389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.966 [2024-07-10 12:31:43.259507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.966 [2024-07-10 12:31:43.259521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:33.966 [2024-07-10 12:31:43.259532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:33.966 [2024-07-10 12:31:43.259543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.966 [2024-07-10 12:31:43.260017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.966 [2024-07-10 12:31:43.260033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:33.966 [2024-07-10 12:31:43.260044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.452 ms 00:29:33.966 [2024-07-10 12:31:43.260054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.966 [2024-07-10 
12:31:43.260187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.966 [2024-07-10 12:31:43.260205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:33.966 [2024-07-10 12:31:43.260216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:29:33.966 [2024-07-10 12:31:43.260226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.966 [2024-07-10 12:31:43.282580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.966 [2024-07-10 12:31:43.282622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:33.966 [2024-07-10 12:31:43.282636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.368 ms 00:29:33.966 [2024-07-10 12:31:43.282662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.966 [2024-07-10 12:31:43.305342] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:29:33.966 [2024-07-10 12:31:43.305391] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:33.966 [2024-07-10 12:31:43.305408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.966 [2024-07-10 12:31:43.305436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:33.966 [2024-07-10 12:31:43.305460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.626 ms 00:29:33.966 [2024-07-10 12:31:43.305469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.966 [2024-07-10 12:31:43.334878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.966 [2024-07-10 12:31:43.334926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:33.966 [2024-07-10 12:31:43.334941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.341 ms 00:29:33.966 [2024-07-10 12:31:43.334967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.966 [2024-07-10 12:31:43.355107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.966 [2024-07-10 12:31:43.355144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:33.966 [2024-07-10 12:31:43.355158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.076 ms 00:29:33.966 [2024-07-10 12:31:43.355184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.966 [2024-07-10 12:31:43.373759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.966 [2024-07-10 12:31:43.373793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:33.966 [2024-07-10 12:31:43.373806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.525 ms 00:29:33.966 [2024-07-10 12:31:43.373831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.966 [2024-07-10 12:31:43.374635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.966 [2024-07-10 12:31:43.374693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:33.966 [2024-07-10 12:31:43.374706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.701 ms 00:29:33.966 [2024-07-10 12:31:43.374716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.225 [2024-07-10 12:31:43.467651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:29:34.225 [2024-07-10 12:31:43.467757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:34.225 [2024-07-10 12:31:43.467793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 93.041 ms 00:29:34.225 [2024-07-10 12:31:43.467804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.225 [2024-07-10 12:31:43.479684] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:29:34.225 [2024-07-10 12:31:43.505074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:34.225 [2024-07-10 12:31:43.505143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:34.225 [2024-07-10 12:31:43.505161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.165 ms 00:29:34.225 [2024-07-10 12:31:43.505172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.225 [2024-07-10 12:31:43.505292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:34.225 [2024-07-10 12:31:43.505306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:34.225 [2024-07-10 12:31:43.505323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:29:34.225 [2024-07-10 12:31:43.505334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.225 [2024-07-10 12:31:43.505395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:34.225 [2024-07-10 12:31:43.505408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:34.225 [2024-07-10 12:31:43.505418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:29:34.225 [2024-07-10 12:31:43.505429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.225 [2024-07-10 12:31:43.505454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:34.225 [2024-07-10 12:31:43.505466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:34.225 [2024-07-10 12:31:43.505477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:34.225 [2024-07-10 12:31:43.505491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.225 [2024-07-10 12:31:43.505527] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:34.225 [2024-07-10 12:31:43.505539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:34.225 [2024-07-10 12:31:43.505549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:34.225 [2024-07-10 12:31:43.505560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:29:34.225 [2024-07-10 12:31:43.505570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.225 [2024-07-10 12:31:43.546583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:34.225 [2024-07-10 12:31:43.546644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:34.225 [2024-07-10 12:31:43.546684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.055 ms 00:29:34.225 [2024-07-10 12:31:43.546696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.225 [2024-07-10 12:31:43.546836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:34.225 [2024-07-10 12:31:43.546851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:29:34.225 [2024-07-10 12:31:43.546863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:29:34.225 [2024-07-10 12:31:43.546873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.225 [2024-07-10 12:31:43.547831] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:34.225 [2024-07-10 12:31:43.552702] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 440.423 ms, result 0 00:29:34.225 [2024-07-10 12:31:43.553486] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:34.225 [2024-07-10 12:31:43.571960] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:43.429  Copying: 29/256 [MB] (29 MBps) Copying: 55/256 [MB] (26 MBps) Copying: 83/256 [MB] (27 MBps) Copying: 110/256 [MB] (27 MBps) Copying: 138/256 [MB] (27 MBps) Copying: 165/256 [MB] (27 MBps) Copying: 193/256 [MB] (27 MBps) Copying: 220/256 [MB] (26 MBps) Copying: 247/256 [MB] (26 MBps) Copying: 256/256 [MB] (average 27 MBps)[2024-07-10 12:31:52.885041] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:43.703 [2024-07-10 12:31:52.900631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:43.703 [2024-07-10 12:31:52.900686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:43.703 [2024-07-10 12:31:52.900704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:43.703 [2024-07-10 12:31:52.900715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.703 [2024-07-10 12:31:52.900752] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:29:43.703 [2024-07-10 12:31:52.904519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:43.703 [2024-07-10 12:31:52.904558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:43.703 [2024-07-10 12:31:52.904571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.756 ms 00:29:43.703 [2024-07-10 12:31:52.904582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.703 [2024-07-10 12:31:52.904812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:43.703 [2024-07-10 12:31:52.904825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:43.703 [2024-07-10 12:31:52.904837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.206 ms 00:29:43.703 [2024-07-10 12:31:52.904848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.703 [2024-07-10 12:31:52.907705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:43.703 [2024-07-10 12:31:52.907726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:43.703 [2024-07-10 12:31:52.907748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.845 ms 00:29:43.703 [2024-07-10 12:31:52.907764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.703 [2024-07-10 12:31:52.913341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:43.703 [2024-07-10 12:31:52.913372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:43.703 [2024-07-10 12:31:52.913385] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.566 ms 00:29:43.703 [2024-07-10 12:31:52.913395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.703 [2024-07-10 12:31:52.951779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:43.703 [2024-07-10 12:31:52.951817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:43.703 [2024-07-10 12:31:52.951833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.379 ms 00:29:43.703 [2024-07-10 12:31:52.951843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.703 [2024-07-10 12:31:52.973158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:43.703 [2024-07-10 12:31:52.973197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:43.703 [2024-07-10 12:31:52.973213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.290 ms 00:29:43.703 [2024-07-10 12:31:52.973234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.703 [2024-07-10 12:31:52.973399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:43.703 [2024-07-10 12:31:52.973413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:43.703 [2024-07-10 12:31:52.973424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:29:43.703 [2024-07-10 12:31:52.973434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.703 [2024-07-10 12:31:53.012504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:43.703 [2024-07-10 12:31:53.012539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:29:43.703 [2024-07-10 12:31:53.012553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.114 ms 00:29:43.703 [2024-07-10 12:31:53.012563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.703 [2024-07-10 12:31:53.050888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:43.703 [2024-07-10 12:31:53.050923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:29:43.703 [2024-07-10 12:31:53.050936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.330 ms 00:29:43.703 [2024-07-10 12:31:53.050946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.703 [2024-07-10 12:31:53.088740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:43.703 [2024-07-10 12:31:53.088775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:43.703 [2024-07-10 12:31:53.088789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.801 ms 00:29:43.703 [2024-07-10 12:31:53.088799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.703 [2024-07-10 12:31:53.125746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:43.703 [2024-07-10 12:31:53.125787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:43.703 [2024-07-10 12:31:53.125800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.923 ms 00:29:43.703 [2024-07-10 12:31:53.125810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.703 [2024-07-10 12:31:53.125866] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:43.703 [2024-07-10 12:31:53.125883] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.125904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.125916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.125927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.125939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.125949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.125960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.125971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.125983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.125995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.126006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.126016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.126027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.126038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.126048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.126059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.126070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.126081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.126091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.126102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.126113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.126124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.126134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.126144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.126155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.126167] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.126177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.126188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.126198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.126209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.126222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.126234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.126246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.126257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.126268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.126279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.126289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.126300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.126310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.126321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.126332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.126343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.126353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.126364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.126374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.126385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.126396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.126406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.126417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.126427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 
12:31:53.126438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.126449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.126459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.126470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.126481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.126491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.126501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.126512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.126522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.126533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.126543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.126554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.126566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.126577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.126588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.126599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.126611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.126622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.126632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.126644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.126654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.126664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.126675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.126685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.126696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 
00:29:43.703 [2024-07-10 12:31:53.126706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.126717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.126727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.126752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.126763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.126773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.126784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.126796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.126807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:43.703 [2024-07-10 12:31:53.126818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:43.704 [2024-07-10 12:31:53.126828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:43.704 [2024-07-10 12:31:53.126839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:43.704 [2024-07-10 12:31:53.126850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:43.704 [2024-07-10 12:31:53.126861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:43.704 [2024-07-10 12:31:53.126871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:43.704 [2024-07-10 12:31:53.126882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:43.704 [2024-07-10 12:31:53.126893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:43.704 [2024-07-10 12:31:53.126904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:43.704 [2024-07-10 12:31:53.126914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:43.704 [2024-07-10 12:31:53.126925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:43.704 [2024-07-10 12:31:53.126935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:43.704 [2024-07-10 12:31:53.126947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:43.704 [2024-07-10 12:31:53.126957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:43.704 [2024-07-10 12:31:53.126968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:43.704 [2024-07-10 12:31:53.126979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 
wr_cnt: 0 state: free 00:29:43.704 [2024-07-10 12:31:53.126998] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:43.704 [2024-07-10 12:31:53.127008] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: be5c4b66-7920-4ed3-b7e3-f52bd4043dbc 00:29:43.704 [2024-07-10 12:31:53.127019] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:43.704 [2024-07-10 12:31:53.127030] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:43.704 [2024-07-10 12:31:53.127051] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:43.704 [2024-07-10 12:31:53.127062] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:43.704 [2024-07-10 12:31:53.127072] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:43.704 [2024-07-10 12:31:53.127082] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:43.704 [2024-07-10 12:31:53.127092] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:43.704 [2024-07-10 12:31:53.127102] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:43.704 [2024-07-10 12:31:53.127111] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:43.704 [2024-07-10 12:31:53.127120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:43.704 [2024-07-10 12:31:53.127130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:43.704 [2024-07-10 12:31:53.127141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.258 ms 00:29:43.704 [2024-07-10 12:31:53.127155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.704 [2024-07-10 12:31:53.148661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:43.704 [2024-07-10 12:31:53.148696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:43.704 [2024-07-10 12:31:53.148709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.520 ms 00:29:43.704 [2024-07-10 12:31:53.148721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.704 [2024-07-10 12:31:53.149323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:43.704 [2024-07-10 12:31:53.149345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:43.704 [2024-07-10 12:31:53.149362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.552 ms 00:29:43.704 [2024-07-10 12:31:53.149373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.961 [2024-07-10 12:31:53.200318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:43.961 [2024-07-10 12:31:53.200359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:43.961 [2024-07-10 12:31:53.200373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:43.961 [2024-07-10 12:31:53.200385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.961 [2024-07-10 12:31:53.200464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:43.961 [2024-07-10 12:31:53.200476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:43.961 [2024-07-10 12:31:53.200493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:43.961 [2024-07-10 12:31:53.200504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:29:43.961 [2024-07-10 12:31:53.200554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:43.961 [2024-07-10 12:31:53.200567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:43.961 [2024-07-10 12:31:53.200578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:43.961 [2024-07-10 12:31:53.200588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.961 [2024-07-10 12:31:53.200609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:43.961 [2024-07-10 12:31:53.200620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:43.961 [2024-07-10 12:31:53.200630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:43.961 [2024-07-10 12:31:53.200645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.961 [2024-07-10 12:31:53.328924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:43.961 [2024-07-10 12:31:53.328984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:43.961 [2024-07-10 12:31:53.329001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:43.961 [2024-07-10 12:31:53.329013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.961 [2024-07-10 12:31:53.432784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:43.961 [2024-07-10 12:31:53.432848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:43.961 [2024-07-10 12:31:53.432865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:43.961 [2024-07-10 12:31:53.432883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.961 [2024-07-10 12:31:53.432970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:43.961 [2024-07-10 12:31:53.432983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:43.961 [2024-07-10 12:31:53.432993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:43.961 [2024-07-10 12:31:53.433004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.961 [2024-07-10 12:31:53.433037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:43.961 [2024-07-10 12:31:53.433048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:43.961 [2024-07-10 12:31:53.433059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:43.961 [2024-07-10 12:31:53.433069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.961 [2024-07-10 12:31:53.433185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:43.961 [2024-07-10 12:31:53.433199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:43.961 [2024-07-10 12:31:53.433210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:43.961 [2024-07-10 12:31:53.433221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.961 [2024-07-10 12:31:53.433265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:43.961 [2024-07-10 12:31:53.433277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:43.961 [2024-07-10 12:31:53.433288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:43.961 [2024-07-10 
12:31:53.433298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.961 [2024-07-10 12:31:53.433345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:43.961 [2024-07-10 12:31:53.433356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:43.961 [2024-07-10 12:31:53.433367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:43.961 [2024-07-10 12:31:53.433377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.961 [2024-07-10 12:31:53.433424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:43.961 [2024-07-10 12:31:53.433435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:43.961 [2024-07-10 12:31:53.433446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:43.961 [2024-07-10 12:31:53.433456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:43.961 [2024-07-10 12:31:53.433606] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 533.832 ms, result 0 00:29:45.336 00:29:45.336 00:29:45.336 12:31:54 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:29:45.336 12:31:54 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:29:45.902 12:31:55 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:45.902 [2024-07-10 12:31:55.236352] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:29:45.902 [2024-07-10 12:31:55.236531] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81554 ] 00:29:46.159 [2024-07-10 12:31:55.400760] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:46.417 [2024-07-10 12:31:55.655605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:46.675 [2024-07-10 12:31:56.065293] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:46.675 [2024-07-10 12:31:56.065375] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:46.935 [2024-07-10 12:31:56.228778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.935 [2024-07-10 12:31:56.228845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:46.935 [2024-07-10 12:31:56.228862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:46.935 [2024-07-10 12:31:56.228873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.935 [2024-07-10 12:31:56.232164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.935 [2024-07-10 12:31:56.232205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:46.935 [2024-07-10 12:31:56.232219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.274 ms 00:29:46.935 [2024-07-10 12:31:56.232229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.935 [2024-07-10 12:31:56.232332] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:46.935 [2024-07-10 12:31:56.233579] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:46.935 [2024-07-10 12:31:56.233611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.935 [2024-07-10 12:31:56.233625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:46.935 [2024-07-10 12:31:56.233637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.289 ms 00:29:46.935 [2024-07-10 12:31:56.233647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.935 [2024-07-10 12:31:56.235754] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:46.935 [2024-07-10 12:31:56.257835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.935 [2024-07-10 12:31:56.257894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:46.935 [2024-07-10 12:31:56.257917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.117 ms 00:29:46.935 [2024-07-10 12:31:56.257929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.935 [2024-07-10 12:31:56.258034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.935 [2024-07-10 12:31:56.258049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:46.935 [2024-07-10 12:31:56.258061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:29:46.935 [2024-07-10 12:31:56.258072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.935 [2024-07-10 12:31:56.265281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:29:46.935 [2024-07-10 12:31:56.265316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:46.935 [2024-07-10 12:31:56.265328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.176 ms 00:29:46.935 [2024-07-10 12:31:56.265339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.935 [2024-07-10 12:31:56.265439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.935 [2024-07-10 12:31:56.265455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:46.935 [2024-07-10 12:31:56.265466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:29:46.935 [2024-07-10 12:31:56.265476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.935 [2024-07-10 12:31:56.265511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.935 [2024-07-10 12:31:56.265523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:46.935 [2024-07-10 12:31:56.265535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:29:46.935 [2024-07-10 12:31:56.265547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.935 [2024-07-10 12:31:56.265573] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:29:46.935 [2024-07-10 12:31:56.271309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.935 [2024-07-10 12:31:56.271341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:46.935 [2024-07-10 12:31:56.271354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.752 ms 00:29:46.935 [2024-07-10 12:31:56.271365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.935 [2024-07-10 12:31:56.271435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.935 [2024-07-10 12:31:56.271448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:46.935 [2024-07-10 12:31:56.271460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:29:46.935 [2024-07-10 12:31:56.271470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.935 [2024-07-10 12:31:56.271491] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:46.935 [2024-07-10 12:31:56.271515] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:46.935 [2024-07-10 12:31:56.271554] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:46.935 [2024-07-10 12:31:56.271572] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:29:46.935 [2024-07-10 12:31:56.271656] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:46.935 [2024-07-10 12:31:56.271670] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:46.935 [2024-07-10 12:31:56.271683] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:29:46.935 [2024-07-10 12:31:56.271696] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:46.935 [2024-07-10 12:31:56.271708] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:46.935 [2024-07-10 12:31:56.271720] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:29:46.935 [2024-07-10 12:31:56.271745] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:46.935 [2024-07-10 12:31:56.271756] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:46.935 [2024-07-10 12:31:56.271766] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:46.935 [2024-07-10 12:31:56.271777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.935 [2024-07-10 12:31:56.271787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:46.935 [2024-07-10 12:31:56.271798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:29:46.935 [2024-07-10 12:31:56.271808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.935 [2024-07-10 12:31:56.271881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.935 [2024-07-10 12:31:56.271892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:46.935 [2024-07-10 12:31:56.271903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:29:46.935 [2024-07-10 12:31:56.271916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.935 [2024-07-10 12:31:56.272000] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:46.935 [2024-07-10 12:31:56.272013] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:46.936 [2024-07-10 12:31:56.272024] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:46.936 [2024-07-10 12:31:56.272036] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:46.936 [2024-07-10 12:31:56.272047] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:46.936 [2024-07-10 12:31:56.272056] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:46.936 [2024-07-10 12:31:56.272076] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:29:46.936 [2024-07-10 12:31:56.272086] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:46.936 [2024-07-10 12:31:56.272096] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:29:46.936 [2024-07-10 12:31:56.272106] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:46.936 [2024-07-10 12:31:56.272115] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:46.936 [2024-07-10 12:31:56.272125] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:29:46.936 [2024-07-10 12:31:56.272134] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:46.936 [2024-07-10 12:31:56.272143] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:46.936 [2024-07-10 12:31:56.272153] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:29:46.936 [2024-07-10 12:31:56.272162] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:46.936 [2024-07-10 12:31:56.272172] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:46.936 [2024-07-10 12:31:56.272181] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:29:46.936 [2024-07-10 12:31:56.272202] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:46.936 [2024-07-10 12:31:56.272211] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:46.936 [2024-07-10 12:31:56.272222] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:29:46.936 [2024-07-10 12:31:56.272232] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:46.936 [2024-07-10 12:31:56.272241] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:46.936 [2024-07-10 12:31:56.272251] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:29:46.936 [2024-07-10 12:31:56.272260] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:46.936 [2024-07-10 12:31:56.272270] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:46.936 [2024-07-10 12:31:56.272280] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:29:46.936 [2024-07-10 12:31:56.272289] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:46.936 [2024-07-10 12:31:56.272299] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:46.936 [2024-07-10 12:31:56.272308] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:29:46.936 [2024-07-10 12:31:56.272317] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:46.936 [2024-07-10 12:31:56.272326] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:46.936 [2024-07-10 12:31:56.272336] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:29:46.936 [2024-07-10 12:31:56.272346] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:46.936 [2024-07-10 12:31:56.272355] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:46.936 [2024-07-10 12:31:56.272364] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:29:46.936 [2024-07-10 12:31:56.272373] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:46.936 [2024-07-10 12:31:56.272382] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:46.936 [2024-07-10 12:31:56.272391] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:29:46.936 [2024-07-10 12:31:56.272400] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:46.936 [2024-07-10 12:31:56.272409] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:46.936 [2024-07-10 12:31:56.272419] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:29:46.936 [2024-07-10 12:31:56.272428] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:46.936 [2024-07-10 12:31:56.272437] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:46.936 [2024-07-10 12:31:56.272446] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:46.936 [2024-07-10 12:31:56.272456] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:46.936 [2024-07-10 12:31:56.272466] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:46.936 [2024-07-10 12:31:56.272475] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:46.936 [2024-07-10 12:31:56.272485] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:46.936 [2024-07-10 12:31:56.272494] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:46.936 
[2024-07-10 12:31:56.272503] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:46.936 [2024-07-10 12:31:56.272513] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:46.936 [2024-07-10 12:31:56.272524] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:46.936 [2024-07-10 12:31:56.272535] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:46.936 [2024-07-10 12:31:56.272552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:46.936 [2024-07-10 12:31:56.272564] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:29:46.936 [2024-07-10 12:31:56.272575] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:29:46.936 [2024-07-10 12:31:56.272586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:29:46.936 [2024-07-10 12:31:56.272598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:29:46.936 [2024-07-10 12:31:56.272609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:29:46.936 [2024-07-10 12:31:56.272620] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:29:46.936 [2024-07-10 12:31:56.272631] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:29:46.936 [2024-07-10 12:31:56.272641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:29:46.936 [2024-07-10 12:31:56.272652] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:29:46.936 [2024-07-10 12:31:56.272662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:29:46.936 [2024-07-10 12:31:56.272673] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:29:46.936 [2024-07-10 12:31:56.272683] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:29:46.936 [2024-07-10 12:31:56.272694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:29:46.936 [2024-07-10 12:31:56.272704] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:29:46.936 [2024-07-10 12:31:56.272714] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:46.936 [2024-07-10 12:31:56.272725] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:46.936 [2024-07-10 12:31:56.272748] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:29:46.936 [2024-07-10 12:31:56.272759] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:46.936 [2024-07-10 12:31:56.272770] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:46.936 [2024-07-10 12:31:56.272782] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:46.936 [2024-07-10 12:31:56.272792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.936 [2024-07-10 12:31:56.272803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:46.936 [2024-07-10 12:31:56.272812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.844 ms 00:29:46.936 [2024-07-10 12:31:56.272822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.936 [2024-07-10 12:31:56.327310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.936 [2024-07-10 12:31:56.327375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:46.936 [2024-07-10 12:31:56.327403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.513 ms 00:29:46.936 [2024-07-10 12:31:56.327415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.936 [2024-07-10 12:31:56.327589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.936 [2024-07-10 12:31:56.327603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:46.936 [2024-07-10 12:31:56.327615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:29:46.936 [2024-07-10 12:31:56.327631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.936 [2024-07-10 12:31:56.382030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.936 [2024-07-10 12:31:56.382095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:46.936 [2024-07-10 12:31:56.382112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.459 ms 00:29:46.936 [2024-07-10 12:31:56.382123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.936 [2024-07-10 12:31:56.382249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.936 [2024-07-10 12:31:56.382261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:46.936 [2024-07-10 12:31:56.382273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:29:46.936 [2024-07-10 12:31:56.382283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.936 [2024-07-10 12:31:56.383040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.936 [2024-07-10 12:31:56.383064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:46.936 [2024-07-10 12:31:56.383077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.734 ms 00:29:46.936 [2024-07-10 12:31:56.383088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.936 [2024-07-10 12:31:56.383216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.936 [2024-07-10 12:31:56.383234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:46.936 [2024-07-10 12:31:56.383245] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:29:46.936 [2024-07-10 12:31:56.383256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.936 [2024-07-10 12:31:56.405958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.936 [2024-07-10 12:31:56.406012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:46.936 [2024-07-10 12:31:56.406027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.713 ms 00:29:46.936 [2024-07-10 12:31:56.406038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.195 [2024-07-10 12:31:56.428186] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:29:47.195 [2024-07-10 12:31:56.428233] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:47.195 [2024-07-10 12:31:56.428252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.195 [2024-07-10 12:31:56.428264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:47.195 [2024-07-10 12:31:56.428278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.091 ms 00:29:47.195 [2024-07-10 12:31:56.428288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.195 [2024-07-10 12:31:56.460079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.195 [2024-07-10 12:31:56.460155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:47.195 [2024-07-10 12:31:56.460172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.736 ms 00:29:47.195 [2024-07-10 12:31:56.460183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.195 [2024-07-10 12:31:56.481752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.195 [2024-07-10 12:31:56.481804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:47.195 [2024-07-10 12:31:56.481820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.489 ms 00:29:47.195 [2024-07-10 12:31:56.481831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.195 [2024-07-10 12:31:56.502529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.195 [2024-07-10 12:31:56.502580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:47.195 [2024-07-10 12:31:56.502595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.637 ms 00:29:47.196 [2024-07-10 12:31:56.502606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.196 [2024-07-10 12:31:56.503499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.196 [2024-07-10 12:31:56.503535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:47.196 [2024-07-10 12:31:56.503549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.779 ms 00:29:47.196 [2024-07-10 12:31:56.503559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.196 [2024-07-10 12:31:56.599598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.196 [2024-07-10 12:31:56.599683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:47.196 [2024-07-10 12:31:56.599702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 96.160 ms 00:29:47.196 [2024-07-10 12:31:56.599713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.196 [2024-07-10 12:31:56.612928] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:29:47.196 [2024-07-10 12:31:56.638029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.196 [2024-07-10 12:31:56.638093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:47.196 [2024-07-10 12:31:56.638111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.211 ms 00:29:47.196 [2024-07-10 12:31:56.638138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.196 [2024-07-10 12:31:56.638265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.196 [2024-07-10 12:31:56.638281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:47.196 [2024-07-10 12:31:56.638297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:29:47.196 [2024-07-10 12:31:56.638308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.196 [2024-07-10 12:31:56.638365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.196 [2024-07-10 12:31:56.638378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:47.196 [2024-07-10 12:31:56.638389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:29:47.196 [2024-07-10 12:31:56.638399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.196 [2024-07-10 12:31:56.638422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.196 [2024-07-10 12:31:56.638433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:47.196 [2024-07-10 12:31:56.638444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:47.196 [2024-07-10 12:31:56.638458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.196 [2024-07-10 12:31:56.638496] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:47.196 [2024-07-10 12:31:56.638508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.196 [2024-07-10 12:31:56.638519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:47.196 [2024-07-10 12:31:56.638530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:29:47.196 [2024-07-10 12:31:56.638551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.454 [2024-07-10 12:31:56.679972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.454 [2024-07-10 12:31:56.680029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:47.454 [2024-07-10 12:31:56.680053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.465 ms 00:29:47.454 [2024-07-10 12:31:56.680071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.454 [2024-07-10 12:31:56.680218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.454 [2024-07-10 12:31:56.680234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:47.454 [2024-07-10 12:31:56.680246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:29:47.454 [2024-07-10 12:31:56.680258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:29:47.454 [2024-07-10 12:31:56.681352] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:47.454 [2024-07-10 12:31:56.686999] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 452.988 ms, result 0 00:29:47.454 [2024-07-10 12:31:56.687919] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:47.454 [2024-07-10 12:31:56.707391] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:47.454  Copying: 4096/4096 [kB] (average 25 MBps)[2024-07-10 12:31:56.867898] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:47.454 [2024-07-10 12:31:56.883635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.455 [2024-07-10 12:31:56.883698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:47.455 [2024-07-10 12:31:56.883715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:47.455 [2024-07-10 12:31:56.883726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.455 [2024-07-10 12:31:56.883765] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:29:47.455 [2024-07-10 12:31:56.887995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.455 [2024-07-10 12:31:56.888033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:47.455 [2024-07-10 12:31:56.888046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.220 ms 00:29:47.455 [2024-07-10 12:31:56.888057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.455 [2024-07-10 12:31:56.889943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.455 [2024-07-10 12:31:56.889981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:47.455 [2024-07-10 12:31:56.889995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.858 ms 00:29:47.455 [2024-07-10 12:31:56.890005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.455 [2024-07-10 12:31:56.893188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.455 [2024-07-10 12:31:56.893220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:47.455 [2024-07-10 12:31:56.893232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.170 ms 00:29:47.455 [2024-07-10 12:31:56.893247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.455 [2024-07-10 12:31:56.898872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.455 [2024-07-10 12:31:56.898903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:47.455 [2024-07-10 12:31:56.898915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.603 ms 00:29:47.455 [2024-07-10 12:31:56.898925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.713 [2024-07-10 12:31:56.938379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.713 [2024-07-10 12:31:56.938416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:47.713 [2024-07-10 12:31:56.938429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
39.455 ms 00:29:47.713 [2024-07-10 12:31:56.938455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.713 [2024-07-10 12:31:56.960125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.713 [2024-07-10 12:31:56.960161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:47.713 [2024-07-10 12:31:56.960176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.638 ms 00:29:47.713 [2024-07-10 12:31:56.960186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.713 [2024-07-10 12:31:56.960331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.713 [2024-07-10 12:31:56.960345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:47.713 [2024-07-10 12:31:56.960356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:29:47.713 [2024-07-10 12:31:56.960367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.713 [2024-07-10 12:31:56.998816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.713 [2024-07-10 12:31:56.998848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:29:47.713 [2024-07-10 12:31:56.998861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.494 ms 00:29:47.713 [2024-07-10 12:31:56.998886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.713 [2024-07-10 12:31:57.036640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.713 [2024-07-10 12:31:57.036675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:29:47.713 [2024-07-10 12:31:57.036688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.761 ms 00:29:47.713 [2024-07-10 12:31:57.036698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.713 [2024-07-10 12:31:57.072866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.713 [2024-07-10 12:31:57.072899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:47.713 [2024-07-10 12:31:57.072912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.160 ms 00:29:47.713 [2024-07-10 12:31:57.072937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.713 [2024-07-10 12:31:57.110133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.713 [2024-07-10 12:31:57.110167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:47.713 [2024-07-10 12:31:57.110179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.176 ms 00:29:47.713 [2024-07-10 12:31:57.110204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.713 [2024-07-10 12:31:57.110257] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:47.713 [2024-07-10 12:31:57.110273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:47.713 [2024-07-10 12:31:57.110293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:47.713 [2024-07-10 12:31:57.110304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:47.713 [2024-07-10 12:31:57.110315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:47.713 [2024-07-10 
12:31:57.110326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:47.713 [2024-07-10 12:31:57.110337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:47.713 [2024-07-10 12:31:57.110347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:47.713 [2024-07-10 12:31:57.110358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:47.713 [2024-07-10 12:31:57.110368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:47.713 [2024-07-10 12:31:57.110379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.110389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.110399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.110409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.110418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.110428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.110439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.110449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.110459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.110469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.110479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.110490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.110500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.110510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.110521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.110531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.110541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.110553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.110563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.110573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 
00:29:47.714 [2024-07-10 12:31:57.110584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.110594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.110606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.110617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.110629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.110639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.110650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.110660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.110671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.110682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.110692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.110703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.110714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.110724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.110748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.110759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.110770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.110780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.110791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.110803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.110814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.110825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.110852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.110863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.110874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 
wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.110884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.110913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.110923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.110934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.110945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.110956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.110967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.110978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.110988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.111000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.111011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.111022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.111033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.111043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.111054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.111065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.111076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.111087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.111097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.111109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.111119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.111130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.111140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.111151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.111161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.111172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.111182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.111192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.111203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.111213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.111224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.111234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.111245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.111255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.111265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:47.714 [2024-07-10 12:31:57.111276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:47.715 [2024-07-10 12:31:57.111288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:47.715 [2024-07-10 12:31:57.111299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:47.715 [2024-07-10 12:31:57.111310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:47.715 [2024-07-10 12:31:57.111320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:47.715 [2024-07-10 12:31:57.111330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:47.715 [2024-07-10 12:31:57.111341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:47.715 [2024-07-10 12:31:57.111352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:47.715 [2024-07-10 12:31:57.111362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:47.715 [2024-07-10 12:31:57.111373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:47.715 [2024-07-10 12:31:57.111384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:47.715 [2024-07-10 12:31:57.111401] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:47.715 [2024-07-10 12:31:57.111411] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: be5c4b66-7920-4ed3-b7e3-f52bd4043dbc 00:29:47.715 [2024-07-10 12:31:57.111423] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:47.715 [2024-07-10 12:31:57.111434] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:47.715 
[2024-07-10 12:31:57.111453] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:47.715 [2024-07-10 12:31:57.111464] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:47.715 [2024-07-10 12:31:57.111473] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:47.715 [2024-07-10 12:31:57.111484] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:47.715 [2024-07-10 12:31:57.111493] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:47.715 [2024-07-10 12:31:57.111502] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:47.715 [2024-07-10 12:31:57.111511] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:47.715 [2024-07-10 12:31:57.111522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.715 [2024-07-10 12:31:57.111532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:47.715 [2024-07-10 12:31:57.111543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.268 ms 00:29:47.715 [2024-07-10 12:31:57.111556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.715 [2024-07-10 12:31:57.132806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.715 [2024-07-10 12:31:57.132838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:47.715 [2024-07-10 12:31:57.132850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.264 ms 00:29:47.715 [2024-07-10 12:31:57.132876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.715 [2024-07-10 12:31:57.133345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.715 [2024-07-10 12:31:57.133357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:47.715 [2024-07-10 12:31:57.133374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.431 ms 00:29:47.715 [2024-07-10 12:31:57.133384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.715 [2024-07-10 12:31:57.182831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:47.715 [2024-07-10 12:31:57.182875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:47.715 [2024-07-10 12:31:57.182888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:47.715 [2024-07-10 12:31:57.182913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.715 [2024-07-10 12:31:57.182980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:47.715 [2024-07-10 12:31:57.182991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:47.715 [2024-07-10 12:31:57.183007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:47.715 [2024-07-10 12:31:57.183017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.715 [2024-07-10 12:31:57.183064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:47.715 [2024-07-10 12:31:57.183077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:47.715 [2024-07-10 12:31:57.183087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:47.715 [2024-07-10 12:31:57.183096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.715 [2024-07-10 12:31:57.183115] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:29:47.715 [2024-07-10 12:31:57.183125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:47.715 [2024-07-10 12:31:57.183135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:47.715 [2024-07-10 12:31:57.183149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.973 [2024-07-10 12:31:57.304346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:47.973 [2024-07-10 12:31:57.304408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:47.974 [2024-07-10 12:31:57.304424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:47.974 [2024-07-10 12:31:57.304435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.974 [2024-07-10 12:31:57.409195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:47.974 [2024-07-10 12:31:57.409269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:47.974 [2024-07-10 12:31:57.409291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:47.974 [2024-07-10 12:31:57.409303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.974 [2024-07-10 12:31:57.409381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:47.974 [2024-07-10 12:31:57.409393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:47.974 [2024-07-10 12:31:57.409404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:47.974 [2024-07-10 12:31:57.409415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.974 [2024-07-10 12:31:57.409446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:47.974 [2024-07-10 12:31:57.409457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:47.974 [2024-07-10 12:31:57.409468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:47.974 [2024-07-10 12:31:57.409479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.974 [2024-07-10 12:31:57.409593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:47.974 [2024-07-10 12:31:57.409606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:47.974 [2024-07-10 12:31:57.409618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:47.974 [2024-07-10 12:31:57.409629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.974 [2024-07-10 12:31:57.409666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:47.974 [2024-07-10 12:31:57.409679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:47.974 [2024-07-10 12:31:57.409690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:47.974 [2024-07-10 12:31:57.409700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.974 [2024-07-10 12:31:57.409766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:47.974 [2024-07-10 12:31:57.409780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:47.974 [2024-07-10 12:31:57.409791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:47.974 [2024-07-10 12:31:57.409802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:29:47.974 [2024-07-10 12:31:57.409862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:47.974 [2024-07-10 12:31:57.409874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:47.974 [2024-07-10 12:31:57.409884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:47.974 [2024-07-10 12:31:57.409895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.974 [2024-07-10 12:31:57.410041] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 527.253 ms, result 0 00:29:49.354 00:29:49.354 00:29:49.354 12:31:58 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:29:49.354 12:31:58 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=81596 00:29:49.354 12:31:58 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 81596 00:29:49.354 12:31:58 ftl.ftl_trim -- common/autotest_common.sh@829 -- # '[' -z 81596 ']' 00:29:49.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:49.354 12:31:58 ftl.ftl_trim -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:49.355 12:31:58 ftl.ftl_trim -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:49.355 12:31:58 ftl.ftl_trim -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:49.355 12:31:58 ftl.ftl_trim -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:49.355 12:31:58 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:29:49.355 [2024-07-10 12:31:58.773084] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:29:49.355 [2024-07-10 12:31:58.773443] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81596 ] 00:29:49.612 [2024-07-10 12:31:58.944750] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:49.870 [2024-07-10 12:31:59.186528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:50.805 12:32:00 ftl.ftl_trim -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:50.805 12:32:00 ftl.ftl_trim -- common/autotest_common.sh@862 -- # return 0 00:29:50.805 12:32:00 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:29:51.064 [2024-07-10 12:32:00.328196] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:51.064 [2024-07-10 12:32:00.328264] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:51.064 [2024-07-10 12:32:00.507533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.064 [2024-07-10 12:32:00.507603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:51.064 [2024-07-10 12:32:00.507619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:51.064 [2024-07-10 12:32:00.507632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.064 [2024-07-10 12:32:00.510829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.064 [2024-07-10 12:32:00.510871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:51.064 [2024-07-10 12:32:00.510885] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.182 ms 00:29:51.064 [2024-07-10 12:32:00.510913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.064 [2024-07-10 12:32:00.511040] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:51.064 [2024-07-10 12:32:00.512186] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:51.064 [2024-07-10 12:32:00.512220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.064 [2024-07-10 12:32:00.512235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:51.064 [2024-07-10 12:32:00.512246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.191 ms 00:29:51.064 [2024-07-10 12:32:00.512259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.065 [2024-07-10 12:32:00.513887] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:51.065 [2024-07-10 12:32:00.534616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.065 [2024-07-10 12:32:00.534655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:51.065 [2024-07-10 12:32:00.534673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.757 ms 00:29:51.065 [2024-07-10 12:32:00.534684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.065 [2024-07-10 12:32:00.534796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.065 [2024-07-10 12:32:00.534812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:51.065 [2024-07-10 12:32:00.534827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:29:51.065 [2024-07-10 12:32:00.534837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.065 [2024-07-10 12:32:00.541564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.065 [2024-07-10 12:32:00.541594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:51.065 [2024-07-10 12:32:00.541615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.684 ms 00:29:51.065 [2024-07-10 12:32:00.541626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.065 [2024-07-10 12:32:00.541766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.065 [2024-07-10 12:32:00.541781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:51.065 [2024-07-10 12:32:00.541796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:29:51.065 [2024-07-10 12:32:00.541806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.065 [2024-07-10 12:32:00.541843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.065 [2024-07-10 12:32:00.541854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:51.065 [2024-07-10 12:32:00.541867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:29:51.065 [2024-07-10 12:32:00.541877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.065 [2024-07-10 12:32:00.541906] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:29:51.325 [2024-07-10 12:32:00.547477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:29:51.325 [2024-07-10 12:32:00.547512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:51.325 [2024-07-10 12:32:00.547524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.589 ms 00:29:51.325 [2024-07-10 12:32:00.547537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.325 [2024-07-10 12:32:00.547605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.325 [2024-07-10 12:32:00.547623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:51.325 [2024-07-10 12:32:00.547634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:29:51.325 [2024-07-10 12:32:00.547650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.325 [2024-07-10 12:32:00.547672] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:51.325 [2024-07-10 12:32:00.547698] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:51.325 [2024-07-10 12:32:00.547752] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:51.325 [2024-07-10 12:32:00.547776] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:29:51.325 [2024-07-10 12:32:00.547860] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:51.325 [2024-07-10 12:32:00.547878] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:51.325 [2024-07-10 12:32:00.547895] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:29:51.325 [2024-07-10 12:32:00.547910] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:51.325 [2024-07-10 12:32:00.547922] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:51.325 [2024-07-10 12:32:00.547937] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:29:51.325 [2024-07-10 12:32:00.547947] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:51.325 [2024-07-10 12:32:00.547959] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:51.325 [2024-07-10 12:32:00.547970] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:51.325 [2024-07-10 12:32:00.547985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.325 [2024-07-10 12:32:00.547995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:51.325 [2024-07-10 12:32:00.548009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.311 ms 00:29:51.325 [2024-07-10 12:32:00.548019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.325 [2024-07-10 12:32:00.548104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.325 [2024-07-10 12:32:00.548116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:51.325 [2024-07-10 12:32:00.548129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:29:51.325 [2024-07-10 12:32:00.548139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.325 [2024-07-10 12:32:00.548234] 
ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:51.325 [2024-07-10 12:32:00.548253] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:51.325 [2024-07-10 12:32:00.548266] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:51.325 [2024-07-10 12:32:00.548277] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:51.325 [2024-07-10 12:32:00.548290] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:51.325 [2024-07-10 12:32:00.548299] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:51.325 [2024-07-10 12:32:00.548312] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:29:51.325 [2024-07-10 12:32:00.548321] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:51.325 [2024-07-10 12:32:00.548336] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:29:51.325 [2024-07-10 12:32:00.548345] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:51.325 [2024-07-10 12:32:00.548357] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:51.325 [2024-07-10 12:32:00.548367] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:29:51.325 [2024-07-10 12:32:00.548380] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:51.325 [2024-07-10 12:32:00.548389] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:51.325 [2024-07-10 12:32:00.548401] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:29:51.325 [2024-07-10 12:32:00.548410] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:51.325 [2024-07-10 12:32:00.548422] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:51.325 [2024-07-10 12:32:00.548432] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:29:51.325 [2024-07-10 12:32:00.548444] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:51.325 [2024-07-10 12:32:00.548453] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:51.325 [2024-07-10 12:32:00.548465] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:29:51.325 [2024-07-10 12:32:00.548474] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:51.325 [2024-07-10 12:32:00.548485] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:51.325 [2024-07-10 12:32:00.548495] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:29:51.325 [2024-07-10 12:32:00.548508] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:51.325 [2024-07-10 12:32:00.548518] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:51.325 [2024-07-10 12:32:00.548529] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:29:51.325 [2024-07-10 12:32:00.548548] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:51.325 [2024-07-10 12:32:00.548560] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:51.325 [2024-07-10 12:32:00.548569] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:29:51.325 [2024-07-10 12:32:00.548582] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:51.325 [2024-07-10 12:32:00.548592] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:51.325 [2024-07-10 
12:32:00.548603] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:29:51.325 [2024-07-10 12:32:00.548612] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:51.325 [2024-07-10 12:32:00.548624] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:51.325 [2024-07-10 12:32:00.548633] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:29:51.325 [2024-07-10 12:32:00.548644] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:51.325 [2024-07-10 12:32:00.548653] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:51.325 [2024-07-10 12:32:00.548665] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:29:51.325 [2024-07-10 12:32:00.548674] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:51.325 [2024-07-10 12:32:00.548688] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:51.325 [2024-07-10 12:32:00.548697] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:29:51.326 [2024-07-10 12:32:00.548710] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:51.326 [2024-07-10 12:32:00.548719] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:51.326 [2024-07-10 12:32:00.548745] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:51.326 [2024-07-10 12:32:00.548756] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:51.326 [2024-07-10 12:32:00.548768] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:51.326 [2024-07-10 12:32:00.548779] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:51.326 [2024-07-10 12:32:00.548791] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:51.326 [2024-07-10 12:32:00.548801] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:51.326 [2024-07-10 12:32:00.548812] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:51.326 [2024-07-10 12:32:00.548822] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:51.326 [2024-07-10 12:32:00.548833] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:51.326 [2024-07-10 12:32:00.548845] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:51.326 [2024-07-10 12:32:00.548860] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:51.326 [2024-07-10 12:32:00.548871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:29:51.326 [2024-07-10 12:32:00.548888] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:29:51.326 [2024-07-10 12:32:00.548899] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:29:51.326 [2024-07-10 12:32:00.548912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:29:51.326 [2024-07-10 12:32:00.548922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:29:51.326 
[2024-07-10 12:32:00.548935] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:29:51.326 [2024-07-10 12:32:00.548945] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:29:51.326 [2024-07-10 12:32:00.548958] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:29:51.326 [2024-07-10 12:32:00.548968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:29:51.326 [2024-07-10 12:32:00.548981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:29:51.326 [2024-07-10 12:32:00.548992] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:29:51.326 [2024-07-10 12:32:00.549005] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:29:51.326 [2024-07-10 12:32:00.549015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:29:51.326 [2024-07-10 12:32:00.549027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:29:51.326 [2024-07-10 12:32:00.549038] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:51.326 [2024-07-10 12:32:00.549051] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:51.326 [2024-07-10 12:32:00.549062] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:51.326 [2024-07-10 12:32:00.549078] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:51.326 [2024-07-10 12:32:00.549088] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:51.326 [2024-07-10 12:32:00.549102] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:51.326 [2024-07-10 12:32:00.549113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.326 [2024-07-10 12:32:00.549127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:51.326 [2024-07-10 12:32:00.549137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.935 ms 00:29:51.326 [2024-07-10 12:32:00.549150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.326 [2024-07-10 12:32:00.594580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.326 [2024-07-10 12:32:00.594634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:51.326 [2024-07-10 12:32:00.594649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.439 ms 00:29:51.326 [2024-07-10 12:32:00.594682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.326 [2024-07-10 12:32:00.594845] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.326 [2024-07-10 12:32:00.594883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:51.326 [2024-07-10 12:32:00.594896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:29:51.326 [2024-07-10 12:32:00.594909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.326 [2024-07-10 12:32:00.646283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.326 [2024-07-10 12:32:00.646342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:51.326 [2024-07-10 12:32:00.646357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.431 ms 00:29:51.326 [2024-07-10 12:32:00.646387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.326 [2024-07-10 12:32:00.646480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.326 [2024-07-10 12:32:00.646506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:51.326 [2024-07-10 12:32:00.646518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:51.326 [2024-07-10 12:32:00.646530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.326 [2024-07-10 12:32:00.646990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.326 [2024-07-10 12:32:00.647007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:51.326 [2024-07-10 12:32:00.647023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.439 ms 00:29:51.326 [2024-07-10 12:32:00.647036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.326 [2024-07-10 12:32:00.647158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.326 [2024-07-10 12:32:00.647175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:51.326 [2024-07-10 12:32:00.647186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:29:51.326 [2024-07-10 12:32:00.647199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.326 [2024-07-10 12:32:00.669744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.326 [2024-07-10 12:32:00.669801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:51.326 [2024-07-10 12:32:00.669832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.558 ms 00:29:51.326 [2024-07-10 12:32:00.669847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.326 [2024-07-10 12:32:00.690798] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:51.326 [2024-07-10 12:32:00.690841] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:51.326 [2024-07-10 12:32:00.690857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.326 [2024-07-10 12:32:00.690871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:51.326 [2024-07-10 12:32:00.690883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.911 ms 00:29:51.326 [2024-07-10 12:32:00.690895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.326 [2024-07-10 12:32:00.720868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.326 [2024-07-10 
12:32:00.720910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:51.326 [2024-07-10 12:32:00.720925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.939 ms 00:29:51.326 [2024-07-10 12:32:00.720954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.326 [2024-07-10 12:32:00.739910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.326 [2024-07-10 12:32:00.739948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:51.326 [2024-07-10 12:32:00.739988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.905 ms 00:29:51.326 [2024-07-10 12:32:00.740090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.326 [2024-07-10 12:32:00.759430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.326 [2024-07-10 12:32:00.759472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:51.326 [2024-07-10 12:32:00.759487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.297 ms 00:29:51.326 [2024-07-10 12:32:00.759498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.326 [2024-07-10 12:32:00.760314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.326 [2024-07-10 12:32:00.760341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:51.326 [2024-07-10 12:32:00.760353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.714 ms 00:29:51.326 [2024-07-10 12:32:00.760366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.586 [2024-07-10 12:32:00.862388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.586 [2024-07-10 12:32:00.862466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:51.586 [2024-07-10 12:32:00.862484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 102.157 ms 00:29:51.586 [2024-07-10 12:32:00.862498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.586 [2024-07-10 12:32:00.874814] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:29:51.586 [2024-07-10 12:32:00.891806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.586 [2024-07-10 12:32:00.891863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:51.586 [2024-07-10 12:32:00.891886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.222 ms 00:29:51.586 [2024-07-10 12:32:00.891901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.586 [2024-07-10 12:32:00.892019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.586 [2024-07-10 12:32:00.892031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:51.586 [2024-07-10 12:32:00.892045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:51.586 [2024-07-10 12:32:00.892055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.586 [2024-07-10 12:32:00.892121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.586 [2024-07-10 12:32:00.892133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:51.586 [2024-07-10 12:32:00.892146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:29:51.586 
[2024-07-10 12:32:00.892156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.586 [2024-07-10 12:32:00.892188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.586 [2024-07-10 12:32:00.892199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:51.586 [2024-07-10 12:32:00.892214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:51.586 [2024-07-10 12:32:00.892224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.586 [2024-07-10 12:32:00.892259] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:51.586 [2024-07-10 12:32:00.892270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.586 [2024-07-10 12:32:00.892286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:51.586 [2024-07-10 12:32:00.892296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:29:51.586 [2024-07-10 12:32:00.892308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.586 [2024-07-10 12:32:00.931594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.586 [2024-07-10 12:32:00.931640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:51.586 [2024-07-10 12:32:00.931654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.324 ms 00:29:51.586 [2024-07-10 12:32:00.931667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.586 [2024-07-10 12:32:00.931808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.586 [2024-07-10 12:32:00.931827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:51.586 [2024-07-10 12:32:00.931838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:29:51.586 [2024-07-10 12:32:00.931851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.586 [2024-07-10 12:32:00.932865] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:51.586 [2024-07-10 12:32:00.937836] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 425.693 ms, result 0 00:29:51.586 [2024-07-10 12:32:00.938953] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:51.586 Some configs were skipped because the RPC state that can call them passed over. 
00:29:51.586 12:32:00 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:29:51.845 [2024-07-10 12:32:01.158483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.845 [2024-07-10 12:32:01.158724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:29:51.845 [2024-07-10 12:32:01.158871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.595 ms 00:29:51.845 [2024-07-10 12:32:01.158912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.845 [2024-07-10 12:32:01.158988] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.109 ms, result 0 00:29:51.845 true 00:29:51.845 12:32:01 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:29:52.105 [2024-07-10 12:32:01.325847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.105 [2024-07-10 12:32:01.326052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:29:52.105 [2024-07-10 12:32:01.326130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.125 ms 00:29:52.105 [2024-07-10 12:32:01.326169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.105 [2024-07-10 12:32:01.326242] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.517 ms, result 0 00:29:52.105 true 00:29:52.105 12:32:01 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 81596 00:29:52.105 12:32:01 ftl.ftl_trim -- common/autotest_common.sh@948 -- # '[' -z 81596 ']' 00:29:52.105 12:32:01 ftl.ftl_trim -- common/autotest_common.sh@952 -- # kill -0 81596 00:29:52.105 12:32:01 ftl.ftl_trim -- common/autotest_common.sh@953 -- # uname 00:29:52.105 12:32:01 ftl.ftl_trim -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:52.105 12:32:01 ftl.ftl_trim -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81596 00:29:52.105 killing process with pid 81596 00:29:52.105 12:32:01 ftl.ftl_trim -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:52.105 12:32:01 ftl.ftl_trim -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:52.105 12:32:01 ftl.ftl_trim -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81596' 00:29:52.105 12:32:01 ftl.ftl_trim -- common/autotest_common.sh@967 -- # kill 81596 00:29:52.105 12:32:01 ftl.ftl_trim -- common/autotest_common.sh@972 -- # wait 81596 00:29:53.482 [2024-07-10 12:32:02.564200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.482 [2024-07-10 12:32:02.564265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:53.482 [2024-07-10 12:32:02.564283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:53.482 [2024-07-10 12:32:02.564294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.482 [2024-07-10 12:32:02.564321] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:29:53.482 [2024-07-10 12:32:02.568221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.482 [2024-07-10 12:32:02.568260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:53.482 [2024-07-10 12:32:02.568274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 3.888 ms 00:29:53.482 [2024-07-10 12:32:02.568290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.482 [2024-07-10 12:32:02.568536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.482 [2024-07-10 12:32:02.568552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:53.482 [2024-07-10 12:32:02.568564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.203 ms 00:29:53.482 [2024-07-10 12:32:02.568576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.482 [2024-07-10 12:32:02.571763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.482 [2024-07-10 12:32:02.571803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:53.482 [2024-07-10 12:32:02.571818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.155 ms 00:29:53.482 [2024-07-10 12:32:02.571831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.482 [2024-07-10 12:32:02.577503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.482 [2024-07-10 12:32:02.577543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:53.482 [2024-07-10 12:32:02.577556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.644 ms 00:29:53.482 [2024-07-10 12:32:02.577570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.482 [2024-07-10 12:32:02.593135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.482 [2024-07-10 12:32:02.593176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:53.482 [2024-07-10 12:32:02.593191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.535 ms 00:29:53.482 [2024-07-10 12:32:02.593207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.483 [2024-07-10 12:32:02.603375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.483 [2024-07-10 12:32:02.603418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:53.483 [2024-07-10 12:32:02.603435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.127 ms 00:29:53.483 [2024-07-10 12:32:02.603448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.483 [2024-07-10 12:32:02.603596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.483 [2024-07-10 12:32:02.603612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:53.483 [2024-07-10 12:32:02.603623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:29:53.483 [2024-07-10 12:32:02.603649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.483 [2024-07-10 12:32:02.619853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.483 [2024-07-10 12:32:02.619998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:29:53.483 [2024-07-10 12:32:02.620104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.209 ms 00:29:53.483 [2024-07-10 12:32:02.620146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.483 [2024-07-10 12:32:02.635709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.483 [2024-07-10 12:32:02.635853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:29:53.483 [2024-07-10 
12:32:02.635873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.517 ms 00:29:53.483 [2024-07-10 12:32:02.635891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.483 [2024-07-10 12:32:02.651515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.483 [2024-07-10 12:32:02.651648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:53.483 [2024-07-10 12:32:02.651798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.600 ms 00:29:53.483 [2024-07-10 12:32:02.651839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.483 [2024-07-10 12:32:02.666449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.483 [2024-07-10 12:32:02.666580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:53.483 [2024-07-10 12:32:02.666720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.539 ms 00:29:53.483 [2024-07-10 12:32:02.666775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.483 [2024-07-10 12:32:02.666832] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:53.483 [2024-07-10 12:32:02.666878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:53.483 [2024-07-10 12:32:02.666928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:53.483 [2024-07-10 12:32:02.667039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:53.483 [2024-07-10 12:32:02.667089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:53.483 [2024-07-10 12:32:02.667138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:53.483 [2024-07-10 12:32:02.667186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:53.483 [2024-07-10 12:32:02.667285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:53.483 [2024-07-10 12:32:02.667466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:53.483 [2024-07-10 12:32:02.667518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:53.483 [2024-07-10 12:32:02.667566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:53.483 [2024-07-10 12:32:02.667616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:53.483 [2024-07-10 12:32:02.667663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:53.483 [2024-07-10 12:32:02.667777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:53.483 [2024-07-10 12:32:02.667827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:53.483 [2024-07-10 12:32:02.667877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:53.483 [2024-07-10 12:32:02.667924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:53.483 [2024-07-10 12:32:02.668023] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:53.483 [2024-07-10 12:32:02.668087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:53.483 [2024-07-10 12:32:02.668138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:53.483 [2024-07-10 12:32:02.668185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:53.483 [2024-07-10 12:32:02.668235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:53.483 [2024-07-10 12:32:02.668332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:53.483 [2024-07-10 12:32:02.668387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:53.483 [2024-07-10 12:32:02.668434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:53.483 [2024-07-10 12:32:02.668484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:53.483 [2024-07-10 12:32:02.668576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:53.483 [2024-07-10 12:32:02.668631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:53.483 [2024-07-10 12:32:02.668679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:53.483 [2024-07-10 12:32:02.668738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:53.483 [2024-07-10 12:32:02.668789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:53.483 [2024-07-10 12:32:02.668915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:53.483 [2024-07-10 12:32:02.669003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:53.483 [2024-07-10 12:32:02.669056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:53.483 [2024-07-10 12:32:02.669103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:53.483 [2024-07-10 12:32:02.669152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:53.483 [2024-07-10 12:32:02.669247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:53.483 [2024-07-10 12:32:02.669301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:53.483 [2024-07-10 12:32:02.669349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:53.483 [2024-07-10 12:32:02.669401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:53.483 [2024-07-10 12:32:02.669489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:53.483 [2024-07-10 12:32:02.669546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:53.483 [2024-07-10 
12:32:02.669637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:53.483 [2024-07-10 12:32:02.669692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:53.483 [2024-07-10 12:32:02.669787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:53.483 [2024-07-10 12:32:02.669844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:53.483 [2024-07-10 12:32:02.669929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:53.483 [2024-07-10 12:32:02.669984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:53.483 [2024-07-10 12:32:02.670031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:53.483 [2024-07-10 12:32:02.670112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:53.483 [2024-07-10 12:32:02.670161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:53.483 [2024-07-10 12:32:02.670176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:53.483 [2024-07-10 12:32:02.670187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:53.483 [2024-07-10 12:32:02.670200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:53.483 [2024-07-10 12:32:02.670212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:53.483 [2024-07-10 12:32:02.670227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:53.483 [2024-07-10 12:32:02.670238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:53.483 [2024-07-10 12:32:02.670251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:53.483 [2024-07-10 12:32:02.670262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:53.483 [2024-07-10 12:32:02.670276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:53.483 [2024-07-10 12:32:02.670287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:53.483 [2024-07-10 12:32:02.670300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:53.483 [2024-07-10 12:32:02.670311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:53.483 [2024-07-10 12:32:02.670324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:53.483 [2024-07-10 12:32:02.670335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:53.483 [2024-07-10 12:32:02.670348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:53.483 [2024-07-10 12:32:02.670359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 
00:29:53.483 [2024-07-10 12:32:02.670372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:53.483 [2024-07-10 12:32:02.670383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:53.483 [2024-07-10 12:32:02.670396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:53.483 [2024-07-10 12:32:02.670406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:53.483 [2024-07-10 12:32:02.670423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:53.483 [2024-07-10 12:32:02.670434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:53.483 [2024-07-10 12:32:02.670446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:53.483 [2024-07-10 12:32:02.670458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:53.483 [2024-07-10 12:32:02.670471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:53.484 [2024-07-10 12:32:02.670481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:53.484 [2024-07-10 12:32:02.670494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:53.484 [2024-07-10 12:32:02.670504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:53.484 [2024-07-10 12:32:02.670518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:53.484 [2024-07-10 12:32:02.670531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:53.484 [2024-07-10 12:32:02.670544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:53.484 [2024-07-10 12:32:02.670555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:53.484 [2024-07-10 12:32:02.670568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:53.484 [2024-07-10 12:32:02.670579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:53.484 [2024-07-10 12:32:02.670592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:53.484 [2024-07-10 12:32:02.670603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:53.484 [2024-07-10 12:32:02.670619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:53.484 [2024-07-10 12:32:02.670630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:53.484 [2024-07-10 12:32:02.670644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:53.484 [2024-07-10 12:32:02.670655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:53.484 [2024-07-10 12:32:02.670668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 
wr_cnt: 0 state: free 00:29:53.484 [2024-07-10 12:32:02.670679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:53.484 [2024-07-10 12:32:02.670692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:53.484 [2024-07-10 12:32:02.670703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:53.484 [2024-07-10 12:32:02.670716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:53.484 [2024-07-10 12:32:02.670727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:53.484 [2024-07-10 12:32:02.670752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:53.484 [2024-07-10 12:32:02.670763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:53.484 [2024-07-10 12:32:02.670776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:53.484 [2024-07-10 12:32:02.670787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:53.484 [2024-07-10 12:32:02.670809] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:53.484 [2024-07-10 12:32:02.670825] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: be5c4b66-7920-4ed3-b7e3-f52bd4043dbc 00:29:53.484 [2024-07-10 12:32:02.670844] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:53.484 [2024-07-10 12:32:02.670854] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:53.484 [2024-07-10 12:32:02.670866] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:53.484 [2024-07-10 12:32:02.670876] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:53.484 [2024-07-10 12:32:02.670888] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:53.484 [2024-07-10 12:32:02.670899] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:53.484 [2024-07-10 12:32:02.670911] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:53.484 [2024-07-10 12:32:02.670921] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:53.484 [2024-07-10 12:32:02.670944] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:53.484 [2024-07-10 12:32:02.670954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.484 [2024-07-10 12:32:02.670967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:53.484 [2024-07-10 12:32:02.670978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.130 ms 00:29:53.484 [2024-07-10 12:32:02.670991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.484 [2024-07-10 12:32:02.691447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.484 [2024-07-10 12:32:02.691487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:53.484 [2024-07-10 12:32:02.691500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.450 ms 00:29:53.484 [2024-07-10 12:32:02.691516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.484 [2024-07-10 12:32:02.692098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:29:53.484 [2024-07-10 12:32:02.692115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:53.484 [2024-07-10 12:32:02.692130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.535 ms 00:29:53.484 [2024-07-10 12:32:02.692145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.484 [2024-07-10 12:32:02.759824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:53.484 [2024-07-10 12:32:02.759872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:53.484 [2024-07-10 12:32:02.759887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:53.484 [2024-07-10 12:32:02.759901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.484 [2024-07-10 12:32:02.759997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:53.484 [2024-07-10 12:32:02.760013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:53.484 [2024-07-10 12:32:02.760026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:53.484 [2024-07-10 12:32:02.760042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.484 [2024-07-10 12:32:02.760105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:53.484 [2024-07-10 12:32:02.760122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:53.484 [2024-07-10 12:32:02.760133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:53.484 [2024-07-10 12:32:02.760148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.484 [2024-07-10 12:32:02.760168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:53.484 [2024-07-10 12:32:02.760181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:53.484 [2024-07-10 12:32:02.760192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:53.484 [2024-07-10 12:32:02.760204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.484 [2024-07-10 12:32:02.878031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:53.484 [2024-07-10 12:32:02.878105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:53.484 [2024-07-10 12:32:02.878123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:53.484 [2024-07-10 12:32:02.878137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.742 [2024-07-10 12:32:02.979800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:53.742 [2024-07-10 12:32:02.979877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:53.742 [2024-07-10 12:32:02.979909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:53.742 [2024-07-10 12:32:02.979923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.742 [2024-07-10 12:32:02.980025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:53.742 [2024-07-10 12:32:02.980040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:53.742 [2024-07-10 12:32:02.980052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:53.742 [2024-07-10 12:32:02.980076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:29:53.742 [2024-07-10 12:32:02.980123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:53.742 [2024-07-10 12:32:02.980137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:53.742 [2024-07-10 12:32:02.980147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:53.742 [2024-07-10 12:32:02.980160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.742 [2024-07-10 12:32:02.980293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:53.742 [2024-07-10 12:32:02.980310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:53.742 [2024-07-10 12:32:02.980320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:53.742 [2024-07-10 12:32:02.980333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.742 [2024-07-10 12:32:02.980370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:53.742 [2024-07-10 12:32:02.980385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:53.742 [2024-07-10 12:32:02.980396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:53.742 [2024-07-10 12:32:02.980409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.742 [2024-07-10 12:32:02.980451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:53.742 [2024-07-10 12:32:02.980468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:53.742 [2024-07-10 12:32:02.980479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:53.742 [2024-07-10 12:32:02.980494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.742 [2024-07-10 12:32:02.980540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:53.742 [2024-07-10 12:32:02.980554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:53.742 [2024-07-10 12:32:02.980566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:53.742 [2024-07-10 12:32:02.980578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.742 [2024-07-10 12:32:02.980724] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 417.180 ms, result 0 00:29:54.686 12:32:04 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:54.686 [2024-07-10 12:32:04.118020] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:29:54.686 [2024-07-10 12:32:04.118153] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81667 ] 00:29:54.951 [2024-07-10 12:32:04.289631] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:55.208 [2024-07-10 12:32:04.528990] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:55.465 [2024-07-10 12:32:04.924898] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:55.465 [2024-07-10 12:32:04.924965] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:55.724 [2024-07-10 12:32:05.088127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.724 [2024-07-10 12:32:05.088182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:55.724 [2024-07-10 12:32:05.088199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:55.724 [2024-07-10 12:32:05.088210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.724 [2024-07-10 12:32:05.091402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.724 [2024-07-10 12:32:05.091442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:55.724 [2024-07-10 12:32:05.091455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.174 ms 00:29:55.724 [2024-07-10 12:32:05.091465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.724 [2024-07-10 12:32:05.091573] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:55.724 [2024-07-10 12:32:05.092685] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:55.724 [2024-07-10 12:32:05.092721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.724 [2024-07-10 12:32:05.092746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:55.724 [2024-07-10 12:32:05.092759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.157 ms 00:29:55.724 [2024-07-10 12:32:05.092770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.724 [2024-07-10 12:32:05.094235] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:55.724 [2024-07-10 12:32:05.114546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.724 [2024-07-10 12:32:05.114584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:55.724 [2024-07-10 12:32:05.114604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.345 ms 00:29:55.724 [2024-07-10 12:32:05.114630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.724 [2024-07-10 12:32:05.114749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.724 [2024-07-10 12:32:05.114765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:55.724 [2024-07-10 12:32:05.114777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:29:55.724 [2024-07-10 12:32:05.114787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.724 [2024-07-10 12:32:05.121477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:29:55.724 [2024-07-10 12:32:05.121509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:55.724 [2024-07-10 12:32:05.121522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.659 ms 00:29:55.724 [2024-07-10 12:32:05.121532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.724 [2024-07-10 12:32:05.121630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.724 [2024-07-10 12:32:05.121644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:55.724 [2024-07-10 12:32:05.121656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:29:55.724 [2024-07-10 12:32:05.121667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.724 [2024-07-10 12:32:05.121700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.724 [2024-07-10 12:32:05.121712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:55.724 [2024-07-10 12:32:05.121723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:29:55.724 [2024-07-10 12:32:05.121751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.724 [2024-07-10 12:32:05.121775] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:29:55.724 [2024-07-10 12:32:05.127180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.724 [2024-07-10 12:32:05.127213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:55.724 [2024-07-10 12:32:05.127226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.420 ms 00:29:55.724 [2024-07-10 12:32:05.127236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.724 [2024-07-10 12:32:05.127304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.724 [2024-07-10 12:32:05.127317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:55.724 [2024-07-10 12:32:05.127328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:29:55.724 [2024-07-10 12:32:05.127338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.724 [2024-07-10 12:32:05.127358] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:55.724 [2024-07-10 12:32:05.127382] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:55.724 [2024-07-10 12:32:05.127422] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:55.724 [2024-07-10 12:32:05.127439] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:29:55.724 [2024-07-10 12:32:05.127523] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:55.724 [2024-07-10 12:32:05.127536] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:55.724 [2024-07-10 12:32:05.127550] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:29:55.724 [2024-07-10 12:32:05.127563] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:55.724 [2024-07-10 12:32:05.127575] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:55.724 [2024-07-10 12:32:05.127587] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:29:55.724 [2024-07-10 12:32:05.127600] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:55.724 [2024-07-10 12:32:05.127610] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:55.724 [2024-07-10 12:32:05.127621] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:55.724 [2024-07-10 12:32:05.127632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.724 [2024-07-10 12:32:05.127642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:55.724 [2024-07-10 12:32:05.127653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:29:55.724 [2024-07-10 12:32:05.127662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.724 [2024-07-10 12:32:05.127749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.725 [2024-07-10 12:32:05.127762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:55.725 [2024-07-10 12:32:05.127773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:29:55.725 [2024-07-10 12:32:05.127786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.725 [2024-07-10 12:32:05.127881] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:55.725 [2024-07-10 12:32:05.127894] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:55.725 [2024-07-10 12:32:05.127905] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:55.725 [2024-07-10 12:32:05.127916] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:55.725 [2024-07-10 12:32:05.127926] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:55.725 [2024-07-10 12:32:05.127935] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:55.725 [2024-07-10 12:32:05.127945] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:29:55.725 [2024-07-10 12:32:05.127955] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:55.725 [2024-07-10 12:32:05.127964] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:29:55.725 [2024-07-10 12:32:05.127974] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:55.725 [2024-07-10 12:32:05.127983] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:55.725 [2024-07-10 12:32:05.127992] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:29:55.725 [2024-07-10 12:32:05.128002] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:55.725 [2024-07-10 12:32:05.128011] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:55.725 [2024-07-10 12:32:05.128021] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:29:55.725 [2024-07-10 12:32:05.128031] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:55.725 [2024-07-10 12:32:05.128040] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:55.725 [2024-07-10 12:32:05.128049] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:29:55.725 [2024-07-10 12:32:05.128078] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:55.725 [2024-07-10 12:32:05.128087] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:55.725 [2024-07-10 12:32:05.128096] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:29:55.725 [2024-07-10 12:32:05.128106] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:55.725 [2024-07-10 12:32:05.128115] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:55.725 [2024-07-10 12:32:05.128124] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:29:55.725 [2024-07-10 12:32:05.128133] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:55.725 [2024-07-10 12:32:05.128143] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:55.725 [2024-07-10 12:32:05.128151] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:29:55.725 [2024-07-10 12:32:05.128161] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:55.725 [2024-07-10 12:32:05.128170] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:55.725 [2024-07-10 12:32:05.128179] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:29:55.725 [2024-07-10 12:32:05.128189] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:55.725 [2024-07-10 12:32:05.128198] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:55.725 [2024-07-10 12:32:05.128207] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:29:55.725 [2024-07-10 12:32:05.128216] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:55.725 [2024-07-10 12:32:05.128225] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:55.725 [2024-07-10 12:32:05.128235] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:29:55.725 [2024-07-10 12:32:05.128244] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:55.725 [2024-07-10 12:32:05.128253] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:55.725 [2024-07-10 12:32:05.128262] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:29:55.725 [2024-07-10 12:32:05.128271] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:55.725 [2024-07-10 12:32:05.128281] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:55.725 [2024-07-10 12:32:05.128290] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:29:55.725 [2024-07-10 12:32:05.128298] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:55.725 [2024-07-10 12:32:05.128307] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:55.725 [2024-07-10 12:32:05.128320] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:55.725 [2024-07-10 12:32:05.128330] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:55.725 [2024-07-10 12:32:05.128339] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:55.725 [2024-07-10 12:32:05.128349] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:55.725 [2024-07-10 12:32:05.128358] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:55.725 [2024-07-10 12:32:05.128368] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:55.725 
[2024-07-10 12:32:05.128377] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:55.725 [2024-07-10 12:32:05.128387] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:55.725 [2024-07-10 12:32:05.128396] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:55.725 [2024-07-10 12:32:05.128406] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:55.725 [2024-07-10 12:32:05.128422] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:55.725 [2024-07-10 12:32:05.128434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:29:55.725 [2024-07-10 12:32:05.128445] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:29:55.725 [2024-07-10 12:32:05.128456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:29:55.725 [2024-07-10 12:32:05.128466] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:29:55.725 [2024-07-10 12:32:05.128477] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:29:55.725 [2024-07-10 12:32:05.128487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:29:55.725 [2024-07-10 12:32:05.128498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:29:55.725 [2024-07-10 12:32:05.128508] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:29:55.725 [2024-07-10 12:32:05.128519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:29:55.725 [2024-07-10 12:32:05.128529] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:29:55.725 [2024-07-10 12:32:05.128540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:29:55.725 [2024-07-10 12:32:05.128550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:29:55.725 [2024-07-10 12:32:05.128560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:29:55.725 [2024-07-10 12:32:05.128571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:29:55.725 [2024-07-10 12:32:05.128582] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:55.725 [2024-07-10 12:32:05.128592] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:55.725 [2024-07-10 12:32:05.128604] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:29:55.725 [2024-07-10 12:32:05.128614] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:55.725 [2024-07-10 12:32:05.128624] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:55.725 [2024-07-10 12:32:05.128634] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:55.725 [2024-07-10 12:32:05.128645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.725 [2024-07-10 12:32:05.128655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:55.725 [2024-07-10 12:32:05.128666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.827 ms 00:29:55.725 [2024-07-10 12:32:05.128676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.725 [2024-07-10 12:32:05.194132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.725 [2024-07-10 12:32:05.194196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:55.725 [2024-07-10 12:32:05.194213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.492 ms 00:29:55.725 [2024-07-10 12:32:05.194225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.725 [2024-07-10 12:32:05.194415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.725 [2024-07-10 12:32:05.194428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:55.725 [2024-07-10 12:32:05.194440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:29:55.725 [2024-07-10 12:32:05.194455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.984 [2024-07-10 12:32:05.245237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.984 [2024-07-10 12:32:05.245294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:55.984 [2024-07-10 12:32:05.245310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.836 ms 00:29:55.984 [2024-07-10 12:32:05.245338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.984 [2024-07-10 12:32:05.245456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.984 [2024-07-10 12:32:05.245469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:55.984 [2024-07-10 12:32:05.245481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:55.984 [2024-07-10 12:32:05.245492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.984 [2024-07-10 12:32:05.245949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.984 [2024-07-10 12:32:05.245964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:55.984 [2024-07-10 12:32:05.245976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.433 ms 00:29:55.984 [2024-07-10 12:32:05.245987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.984 [2024-07-10 12:32:05.246113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.984 [2024-07-10 12:32:05.246129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:55.984 [2024-07-10 12:32:05.246141] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:29:55.984 [2024-07-10 12:32:05.246152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.984 [2024-07-10 12:32:05.267799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.984 [2024-07-10 12:32:05.267860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:55.984 [2024-07-10 12:32:05.267877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.655 ms 00:29:55.984 [2024-07-10 12:32:05.267888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.984 [2024-07-10 12:32:05.288646] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:55.984 [2024-07-10 12:32:05.288690] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:55.984 [2024-07-10 12:32:05.288706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.984 [2024-07-10 12:32:05.288718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:55.984 [2024-07-10 12:32:05.288742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.686 ms 00:29:55.984 [2024-07-10 12:32:05.288753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.984 [2024-07-10 12:32:05.318951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.984 [2024-07-10 12:32:05.319000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:55.984 [2024-07-10 12:32:05.319017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.156 ms 00:29:55.984 [2024-07-10 12:32:05.319028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.984 [2024-07-10 12:32:05.337840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.984 [2024-07-10 12:32:05.337882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:55.984 [2024-07-10 12:32:05.337897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.747 ms 00:29:55.984 [2024-07-10 12:32:05.337907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.984 [2024-07-10 12:32:05.356824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.984 [2024-07-10 12:32:05.356864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:55.984 [2024-07-10 12:32:05.356878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.865 ms 00:29:55.984 [2024-07-10 12:32:05.356888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.984 [2024-07-10 12:32:05.357749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.984 [2024-07-10 12:32:05.357778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:55.984 [2024-07-10 12:32:05.357791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.752 ms 00:29:55.984 [2024-07-10 12:32:05.357802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.984 [2024-07-10 12:32:05.446250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.984 [2024-07-10 12:32:05.446333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:55.984 [2024-07-10 12:32:05.446350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 88.558 ms 00:29:55.984 [2024-07-10 12:32:05.446361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.984 [2024-07-10 12:32:05.458460] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:29:56.243 [2024-07-10 12:32:05.475722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.243 [2024-07-10 12:32:05.475789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:56.243 [2024-07-10 12:32:05.475807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.269 ms 00:29:56.243 [2024-07-10 12:32:05.475834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.243 [2024-07-10 12:32:05.475955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.243 [2024-07-10 12:32:05.475969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:56.243 [2024-07-10 12:32:05.475986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:29:56.243 [2024-07-10 12:32:05.475998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.243 [2024-07-10 12:32:05.476057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.243 [2024-07-10 12:32:05.476080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:56.243 [2024-07-10 12:32:05.476092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:29:56.243 [2024-07-10 12:32:05.476101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.243 [2024-07-10 12:32:05.476129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.243 [2024-07-10 12:32:05.476140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:56.243 [2024-07-10 12:32:05.476151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:56.243 [2024-07-10 12:32:05.476164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.243 [2024-07-10 12:32:05.476197] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:56.243 [2024-07-10 12:32:05.476209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.243 [2024-07-10 12:32:05.476219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:56.243 [2024-07-10 12:32:05.476229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:29:56.243 [2024-07-10 12:32:05.476240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.243 [2024-07-10 12:32:05.514622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.243 [2024-07-10 12:32:05.514803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:56.243 [2024-07-10 12:32:05.514921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.416 ms 00:29:56.243 [2024-07-10 12:32:05.514961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.243 [2024-07-10 12:32:05.515098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.243 [2024-07-10 12:32:05.515255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:56.243 [2024-07-10 12:32:05.515332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:29:56.243 [2024-07-10 12:32:05.515362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:29:56.243 [2024-07-10 12:32:05.516411] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:56.243 [2024-07-10 12:32:05.521603] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 428.679 ms, result 0 00:29:56.243 [2024-07-10 12:32:05.522584] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:56.243 [2024-07-10 12:32:05.541444] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:06.028  Copying: 29/256 [MB] (29 MBps) Copying: 55/256 [MB] (25 MBps) Copying: 81/256 [MB] (26 MBps) Copying: 108/256 [MB] (27 MBps) Copying: 136/256 [MB] (27 MBps) Copying: 163/256 [MB] (26 MBps) Copying: 189/256 [MB] (26 MBps) Copying: 216/256 [MB] (27 MBps) Copying: 243/256 [MB] (26 MBps) Copying: 256/256 [MB] (average 27 MBps)[2024-07-10 12:32:15.417710] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:06.028 [2024-07-10 12:32:15.435294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:06.028 [2024-07-10 12:32:15.435345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:06.028 [2024-07-10 12:32:15.435363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:06.028 [2024-07-10 12:32:15.435374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.028 [2024-07-10 12:32:15.435403] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:30:06.028 [2024-07-10 12:32:15.439243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:06.028 [2024-07-10 12:32:15.439289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:06.028 [2024-07-10 12:32:15.439302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.826 ms 00:30:06.028 [2024-07-10 12:32:15.439313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.028 [2024-07-10 12:32:15.439562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:06.028 [2024-07-10 12:32:15.439575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:06.028 [2024-07-10 12:32:15.439587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.212 ms 00:30:06.028 [2024-07-10 12:32:15.439598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.028 [2024-07-10 12:32:15.442770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:06.028 [2024-07-10 12:32:15.442810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:06.028 [2024-07-10 12:32:15.442827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.156 ms 00:30:06.028 [2024-07-10 12:32:15.442850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.028 [2024-07-10 12:32:15.449188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:06.028 [2024-07-10 12:32:15.449231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:06.028 [2024-07-10 12:32:15.449245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.304 ms 00:30:06.028 [2024-07-10 12:32:15.449255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.028 [2024-07-10 12:32:15.489130] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:06.028 [2024-07-10 12:32:15.489174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:06.028 [2024-07-10 12:32:15.489189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.850 ms 00:30:06.028 [2024-07-10 12:32:15.489200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.288 [2024-07-10 12:32:15.511150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:06.288 [2024-07-10 12:32:15.511193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:06.288 [2024-07-10 12:32:15.511208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.921 ms 00:30:06.288 [2024-07-10 12:32:15.511219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.288 [2024-07-10 12:32:15.511376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:06.288 [2024-07-10 12:32:15.511390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:06.288 [2024-07-10 12:32:15.511403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:30:06.288 [2024-07-10 12:32:15.511413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.288 [2024-07-10 12:32:15.551071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:06.288 [2024-07-10 12:32:15.551112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:30:06.288 [2024-07-10 12:32:15.551127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.703 ms 00:30:06.288 [2024-07-10 12:32:15.551137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.288 [2024-07-10 12:32:15.589082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:06.288 [2024-07-10 12:32:15.589120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:30:06.288 [2024-07-10 12:32:15.589133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.947 ms 00:30:06.288 [2024-07-10 12:32:15.589143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.288 [2024-07-10 12:32:15.627081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:06.288 [2024-07-10 12:32:15.627117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:06.288 [2024-07-10 12:32:15.627130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.943 ms 00:30:06.288 [2024-07-10 12:32:15.627139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.288 [2024-07-10 12:32:15.665565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:06.288 [2024-07-10 12:32:15.665601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:06.288 [2024-07-10 12:32:15.665614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.390 ms 00:30:06.288 [2024-07-10 12:32:15.665624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.288 [2024-07-10 12:32:15.665677] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:06.288 [2024-07-10 12:32:15.665694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:30:06.288 [2024-07-10 12:32:15.665715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:06.288 [2024-07-10 
12:32:15.665727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:06.288 [2024-07-10 12:32:15.665751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:06.288 [2024-07-10 12:32:15.665763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:06.288 [2024-07-10 12:32:15.665790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:06.288 [2024-07-10 12:32:15.665802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:06.288 [2024-07-10 12:32:15.665813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:06.288 [2024-07-10 12:32:15.665824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:06.288 [2024-07-10 12:32:15.665835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:06.288 [2024-07-10 12:32:15.665846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:06.288 [2024-07-10 12:32:15.665857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:06.288 [2024-07-10 12:32:15.665867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:06.288 [2024-07-10 12:32:15.665878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:06.288 [2024-07-10 12:32:15.665888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:06.288 [2024-07-10 12:32:15.665899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:06.288 [2024-07-10 12:32:15.665909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:06.288 [2024-07-10 12:32:15.665919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:06.288 [2024-07-10 12:32:15.665930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:06.288 [2024-07-10 12:32:15.665941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:06.288 [2024-07-10 12:32:15.665951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:06.288 [2024-07-10 12:32:15.665962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:06.288 [2024-07-10 12:32:15.665973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:06.288 [2024-07-10 12:32:15.665984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:06.288 [2024-07-10 12:32:15.665996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:06.288 [2024-07-10 12:32:15.666006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:06.288 [2024-07-10 12:32:15.666017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 
00:30:06.288 [2024-07-10 12:32:15.666027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:06.288 [2024-07-10 12:32:15.666038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:06.288 [2024-07-10 12:32:15.666048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:06.288 [2024-07-10 12:32:15.666060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:06.288 [2024-07-10 12:32:15.666071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:06.288 [2024-07-10 12:32:15.666083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:06.288 [2024-07-10 12:32:15.666093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:06.288 [2024-07-10 12:32:15.666105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:06.288 [2024-07-10 12:32:15.666115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:06.288 [2024-07-10 12:32:15.666126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:06.288 [2024-07-10 12:32:15.666137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:06.288 [2024-07-10 12:32:15.666147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:06.288 [2024-07-10 12:32:15.666158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:06.288 [2024-07-10 12:32:15.666168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:06.288 [2024-07-10 12:32:15.666179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:06.288 [2024-07-10 12:32:15.666190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:06.288 [2024-07-10 12:32:15.666201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:06.288 [2024-07-10 12:32:15.666211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:06.288 [2024-07-10 12:32:15.666221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:06.288 [2024-07-10 12:32:15.666231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:06.288 [2024-07-10 12:32:15.666241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:06.289 [2024-07-10 12:32:15.666251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:06.289 [2024-07-10 12:32:15.666262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:06.289 [2024-07-10 12:32:15.666272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:06.289 [2024-07-10 12:32:15.666282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 
wr_cnt: 0 state: free 00:30:06.289 [2024-07-10 12:32:15.666293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:06.289 [2024-07-10 12:32:15.666303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:06.289 [2024-07-10 12:32:15.666313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:06.289 [2024-07-10 12:32:15.666323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:06.289 [2024-07-10 12:32:15.666333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:06.289 [2024-07-10 12:32:15.666344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:06.289 [2024-07-10 12:32:15.666355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:06.289 [2024-07-10 12:32:15.666365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:06.289 [2024-07-10 12:32:15.666375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:06.289 [2024-07-10 12:32:15.666386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:06.289 [2024-07-10 12:32:15.666396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:06.289 [2024-07-10 12:32:15.666407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:06.289 [2024-07-10 12:32:15.666419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:06.289 [2024-07-10 12:32:15.666430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:06.289 [2024-07-10 12:32:15.666441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:06.289 [2024-07-10 12:32:15.666452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:06.289 [2024-07-10 12:32:15.666462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:06.289 [2024-07-10 12:32:15.666474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:06.289 [2024-07-10 12:32:15.666484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:06.289 [2024-07-10 12:32:15.666495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:06.289 [2024-07-10 12:32:15.666505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:06.289 [2024-07-10 12:32:15.666516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:06.289 [2024-07-10 12:32:15.666526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:06.289 [2024-07-10 12:32:15.666536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:06.289 [2024-07-10 12:32:15.666546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:06.289 [2024-07-10 12:32:15.666556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:06.289 [2024-07-10 12:32:15.666567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:06.289 [2024-07-10 12:32:15.666578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:06.289 [2024-07-10 12:32:15.666588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:06.289 [2024-07-10 12:32:15.666599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:06.289 [2024-07-10 12:32:15.666609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:06.289 [2024-07-10 12:32:15.666619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:06.289 [2024-07-10 12:32:15.666630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:06.289 [2024-07-10 12:32:15.666640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:06.289 [2024-07-10 12:32:15.666650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:06.289 [2024-07-10 12:32:15.666661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:06.289 [2024-07-10 12:32:15.666671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:06.289 [2024-07-10 12:32:15.666683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:06.289 [2024-07-10 12:32:15.666693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:06.289 [2024-07-10 12:32:15.666704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:06.289 [2024-07-10 12:32:15.666714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:06.289 [2024-07-10 12:32:15.666725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:06.289 [2024-07-10 12:32:15.666738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:06.289 [2024-07-10 12:32:15.666757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:06.289 [2024-07-10 12:32:15.666768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:06.289 [2024-07-10 12:32:15.666779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:06.289 [2024-07-10 12:32:15.666791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:06.289 [2024-07-10 12:32:15.666802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:06.289 [2024-07-10 12:32:15.666821] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:06.289 [2024-07-10 12:32:15.666831] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 
be5c4b66-7920-4ed3-b7e3-f52bd4043dbc 00:30:06.289 [2024-07-10 12:32:15.666843] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:30:06.289 [2024-07-10 12:32:15.666853] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:30:06.289 [2024-07-10 12:32:15.666874] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:06.289 [2024-07-10 12:32:15.666885] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:06.289 [2024-07-10 12:32:15.666895] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:06.289 [2024-07-10 12:32:15.666905] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:06.289 [2024-07-10 12:32:15.666915] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:06.289 [2024-07-10 12:32:15.666924] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:06.289 [2024-07-10 12:32:15.666933] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:06.289 [2024-07-10 12:32:15.666943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:06.289 [2024-07-10 12:32:15.666954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:06.289 [2024-07-10 12:32:15.666965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.270 ms 00:30:06.289 [2024-07-10 12:32:15.666979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.289 [2024-07-10 12:32:15.686778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:06.289 [2024-07-10 12:32:15.686810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:06.289 [2024-07-10 12:32:15.686822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.809 ms 00:30:06.289 [2024-07-10 12:32:15.686848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.289 [2024-07-10 12:32:15.687361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:06.289 [2024-07-10 12:32:15.687376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:06.289 [2024-07-10 12:32:15.687393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.477 ms 00:30:06.289 [2024-07-10 12:32:15.687403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.289 [2024-07-10 12:32:15.735039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:06.289 [2024-07-10 12:32:15.735076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:06.289 [2024-07-10 12:32:15.735090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:06.289 [2024-07-10 12:32:15.735101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.289 [2024-07-10 12:32:15.735174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:06.289 [2024-07-10 12:32:15.735186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:06.289 [2024-07-10 12:32:15.735203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:06.289 [2024-07-10 12:32:15.735214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.289 [2024-07-10 12:32:15.735264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:06.289 [2024-07-10 12:32:15.735276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:06.289 
[2024-07-10 12:32:15.735287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:06.289 [2024-07-10 12:32:15.735297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.289 [2024-07-10 12:32:15.735317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:06.289 [2024-07-10 12:32:15.735327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:06.289 [2024-07-10 12:32:15.735338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:06.289 [2024-07-10 12:32:15.735353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.549 [2024-07-10 12:32:15.855965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:06.549 [2024-07-10 12:32:15.856026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:06.549 [2024-07-10 12:32:15.856044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:06.549 [2024-07-10 12:32:15.856055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.549 [2024-07-10 12:32:15.958615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:06.549 [2024-07-10 12:32:15.958673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:06.549 [2024-07-10 12:32:15.958690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:06.549 [2024-07-10 12:32:15.958707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.549 [2024-07-10 12:32:15.958806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:06.549 [2024-07-10 12:32:15.958819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:06.549 [2024-07-10 12:32:15.958831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:06.549 [2024-07-10 12:32:15.958841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.549 [2024-07-10 12:32:15.958872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:06.549 [2024-07-10 12:32:15.958882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:06.549 [2024-07-10 12:32:15.958893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:06.549 [2024-07-10 12:32:15.958903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.549 [2024-07-10 12:32:15.959030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:06.549 [2024-07-10 12:32:15.959043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:06.549 [2024-07-10 12:32:15.959054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:06.549 [2024-07-10 12:32:15.959063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.549 [2024-07-10 12:32:15.959100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:06.549 [2024-07-10 12:32:15.959112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:06.549 [2024-07-10 12:32:15.959122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:06.549 [2024-07-10 12:32:15.959143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.549 [2024-07-10 12:32:15.959191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:06.549 [2024-07-10 12:32:15.959202] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:06.549 [2024-07-10 12:32:15.959213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:06.549 [2024-07-10 12:32:15.959222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.549 [2024-07-10 12:32:15.959270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:06.549 [2024-07-10 12:32:15.959280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:06.549 [2024-07-10 12:32:15.959291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:06.549 [2024-07-10 12:32:15.959302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.549 [2024-07-10 12:32:15.959454] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 525.013 ms, result 0 00:30:07.927 00:30:07.927 00:30:07.927 12:32:17 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:30:08.185 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:30:08.185 12:32:17 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:30:08.185 12:32:17 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:30:08.185 12:32:17 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:30:08.443 12:32:17 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:08.443 12:32:17 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:30:08.443 12:32:17 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:30:08.443 12:32:17 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 81596 00:30:08.443 12:32:17 ftl.ftl_trim -- common/autotest_common.sh@948 -- # '[' -z 81596 ']' 00:30:08.443 12:32:17 ftl.ftl_trim -- common/autotest_common.sh@952 -- # kill -0 81596 00:30:08.443 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (81596) - No such process 00:30:08.443 Process with pid 81596 is not found 00:30:08.443 12:32:17 ftl.ftl_trim -- common/autotest_common.sh@975 -- # echo 'Process with pid 81596 is not found' 00:30:08.443 ************************************ 00:30:08.443 END TEST ftl_trim 00:30:08.443 ************************************ 00:30:08.443 00:30:08.443 real 1m9.935s 00:30:08.443 user 1m34.665s 00:30:08.443 sys 0m6.697s 00:30:08.443 12:32:17 ftl.ftl_trim -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:08.443 12:32:17 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:30:08.443 12:32:17 ftl -- common/autotest_common.sh@1142 -- # return 0 00:30:08.443 12:32:17 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:30:08.443 12:32:17 ftl -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:30:08.443 12:32:17 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:08.443 12:32:17 ftl -- common/autotest_common.sh@10 -- # set +x 00:30:08.443 ************************************ 00:30:08.443 START TEST ftl_restore 00:30:08.443 ************************************ 00:30:08.443 12:32:17 ftl.ftl_restore -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:30:08.754 * Looking for test storage... 
00:30:08.754 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:30:08.754 12:32:17 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:30:08.754 12:32:17 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:30:08.754 12:32:17 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:30:08.754 12:32:17 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:30:08.754 12:32:17 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:30:08.754 12:32:17 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:30:08.754 12:32:17 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:08.754 12:32:17 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:30:08.754 12:32:17 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:30:08.754 12:32:17 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:08.754 12:32:17 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:08.754 12:32:17 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:30:08.754 12:32:17 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:30:08.754 12:32:17 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:08.754 12:32:17 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:08.754 12:32:17 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:30:08.754 12:32:17 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:30:08.754 12:32:17 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:08.754 12:32:17 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:08.754 12:32:17 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:30:08.754 12:32:17 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:30:08.754 12:32:17 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:08.754 12:32:17 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:08.754 12:32:17 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:08.754 12:32:17 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:08.754 12:32:17 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:30:08.754 12:32:17 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:30:08.754 12:32:17 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:08.754 12:32:17 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:08.754 12:32:18 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:08.754 12:32:18 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:30:08.754 12:32:18 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.VbXVX5Axup 00:30:08.754 12:32:18 ftl.ftl_restore -- 
ftl/restore.sh@15 -- # getopts :u:c:f opt 00:30:08.754 12:32:18 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:30:08.754 12:32:18 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:30:08.754 12:32:18 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:30:08.754 12:32:18 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:30:08.754 12:32:18 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:30:08.754 12:32:18 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:30:08.754 12:32:18 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:30:08.754 12:32:18 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:08.754 12:32:18 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=81863 00:30:08.754 12:32:18 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 81863 00:30:08.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:08.754 12:32:18 ftl.ftl_restore -- common/autotest_common.sh@829 -- # '[' -z 81863 ']' 00:30:08.754 12:32:18 ftl.ftl_restore -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:08.754 12:32:18 ftl.ftl_restore -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:08.754 12:32:18 ftl.ftl_restore -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:08.754 12:32:18 ftl.ftl_restore -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:08.754 12:32:18 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:30:08.754 [2024-07-10 12:32:18.139087] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:30:08.754 [2024-07-10 12:32:18.139303] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81863 ] 00:30:09.013 [2024-07-10 12:32:18.328334] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:09.271 [2024-07-10 12:32:18.569922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:10.202 12:32:19 ftl.ftl_restore -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:10.202 12:32:19 ftl.ftl_restore -- common/autotest_common.sh@862 -- # return 0 00:30:10.202 12:32:19 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:30:10.202 12:32:19 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:30:10.202 12:32:19 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:30:10.202 12:32:19 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:30:10.202 12:32:19 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:30:10.202 12:32:19 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:30:10.461 12:32:19 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:30:10.461 12:32:19 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:30:10.461 12:32:19 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:30:10.461 12:32:19 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:30:10.461 12:32:19 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:30:10.461 12:32:19 ftl.ftl_restore -- 
common/autotest_common.sh@1380 -- # local bs 00:30:10.461 12:32:19 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:30:10.461 12:32:19 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:30:10.721 12:32:19 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:30:10.721 { 00:30:10.721 "name": "nvme0n1", 00:30:10.721 "aliases": [ 00:30:10.721 "1a3b5de1-9382-413e-8b19-de271a706fff" 00:30:10.721 ], 00:30:10.721 "product_name": "NVMe disk", 00:30:10.721 "block_size": 4096, 00:30:10.721 "num_blocks": 1310720, 00:30:10.721 "uuid": "1a3b5de1-9382-413e-8b19-de271a706fff", 00:30:10.721 "assigned_rate_limits": { 00:30:10.721 "rw_ios_per_sec": 0, 00:30:10.721 "rw_mbytes_per_sec": 0, 00:30:10.721 "r_mbytes_per_sec": 0, 00:30:10.721 "w_mbytes_per_sec": 0 00:30:10.721 }, 00:30:10.721 "claimed": true, 00:30:10.721 "claim_type": "read_many_write_one", 00:30:10.721 "zoned": false, 00:30:10.721 "supported_io_types": { 00:30:10.721 "read": true, 00:30:10.721 "write": true, 00:30:10.721 "unmap": true, 00:30:10.721 "flush": true, 00:30:10.721 "reset": true, 00:30:10.721 "nvme_admin": true, 00:30:10.721 "nvme_io": true, 00:30:10.721 "nvme_io_md": false, 00:30:10.721 "write_zeroes": true, 00:30:10.721 "zcopy": false, 00:30:10.721 "get_zone_info": false, 00:30:10.721 "zone_management": false, 00:30:10.721 "zone_append": false, 00:30:10.721 "compare": true, 00:30:10.721 "compare_and_write": false, 00:30:10.721 "abort": true, 00:30:10.721 "seek_hole": false, 00:30:10.721 "seek_data": false, 00:30:10.721 "copy": true, 00:30:10.721 "nvme_iov_md": false 00:30:10.721 }, 00:30:10.721 "driver_specific": { 00:30:10.721 "nvme": [ 00:30:10.721 { 00:30:10.721 "pci_address": "0000:00:11.0", 00:30:10.721 "trid": { 00:30:10.721 "trtype": "PCIe", 00:30:10.721 "traddr": "0000:00:11.0" 00:30:10.721 }, 00:30:10.721 "ctrlr_data": { 00:30:10.721 "cntlid": 0, 00:30:10.721 "vendor_id": "0x1b36", 00:30:10.721 "model_number": "QEMU NVMe Ctrl", 00:30:10.721 "serial_number": "12341", 00:30:10.721 "firmware_revision": "8.0.0", 00:30:10.721 "subnqn": "nqn.2019-08.org.qemu:12341", 00:30:10.721 "oacs": { 00:30:10.721 "security": 0, 00:30:10.721 "format": 1, 00:30:10.721 "firmware": 0, 00:30:10.721 "ns_manage": 1 00:30:10.721 }, 00:30:10.721 "multi_ctrlr": false, 00:30:10.721 "ana_reporting": false 00:30:10.721 }, 00:30:10.721 "vs": { 00:30:10.721 "nvme_version": "1.4" 00:30:10.721 }, 00:30:10.721 "ns_data": { 00:30:10.721 "id": 1, 00:30:10.721 "can_share": false 00:30:10.721 } 00:30:10.721 } 00:30:10.721 ], 00:30:10.721 "mp_policy": "active_passive" 00:30:10.721 } 00:30:10.721 } 00:30:10.721 ]' 00:30:10.721 12:32:19 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:30:10.721 12:32:20 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:30:10.721 12:32:20 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:30:10.721 12:32:20 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=1310720 00:30:10.721 12:32:20 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:30:10.721 12:32:20 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 5120 00:30:10.721 12:32:20 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:30:10.721 12:32:20 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:30:10.721 12:32:20 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:30:10.721 12:32:20 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r 
'.[] | .uuid' 00:30:10.721 12:32:20 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:10.991 12:32:20 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=65f5c800-8d08-4229-b5ae-86f119fe0c55 00:30:10.991 12:32:20 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:30:10.991 12:32:20 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 65f5c800-8d08-4229-b5ae-86f119fe0c55 00:30:11.291 12:32:20 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:30:11.291 12:32:20 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=db2d9183-054a-4dab-8d5a-78e0f2ba64dc 00:30:11.291 12:32:20 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u db2d9183-054a-4dab-8d5a-78e0f2ba64dc 00:30:11.549 12:32:20 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=bb81b00c-559e-48e5-8b3f-1c3be9e26c09 00:30:11.549 12:32:20 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:30:11.549 12:32:20 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 bb81b00c-559e-48e5-8b3f-1c3be9e26c09 00:30:11.549 12:32:20 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:30:11.549 12:32:20 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:30:11.549 12:32:20 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=bb81b00c-559e-48e5-8b3f-1c3be9e26c09 00:30:11.549 12:32:20 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:30:11.549 12:32:20 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size bb81b00c-559e-48e5-8b3f-1c3be9e26c09 00:30:11.549 12:32:20 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=bb81b00c-559e-48e5-8b3f-1c3be9e26c09 00:30:11.549 12:32:20 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:30:11.549 12:32:20 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:30:11.549 12:32:20 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:30:11.550 12:32:20 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b bb81b00c-559e-48e5-8b3f-1c3be9e26c09 00:30:11.809 12:32:21 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:30:11.809 { 00:30:11.809 "name": "bb81b00c-559e-48e5-8b3f-1c3be9e26c09", 00:30:11.809 "aliases": [ 00:30:11.809 "lvs/nvme0n1p0" 00:30:11.809 ], 00:30:11.809 "product_name": "Logical Volume", 00:30:11.809 "block_size": 4096, 00:30:11.809 "num_blocks": 26476544, 00:30:11.809 "uuid": "bb81b00c-559e-48e5-8b3f-1c3be9e26c09", 00:30:11.809 "assigned_rate_limits": { 00:30:11.809 "rw_ios_per_sec": 0, 00:30:11.809 "rw_mbytes_per_sec": 0, 00:30:11.809 "r_mbytes_per_sec": 0, 00:30:11.809 "w_mbytes_per_sec": 0 00:30:11.809 }, 00:30:11.809 "claimed": false, 00:30:11.809 "zoned": false, 00:30:11.809 "supported_io_types": { 00:30:11.809 "read": true, 00:30:11.809 "write": true, 00:30:11.809 "unmap": true, 00:30:11.809 "flush": false, 00:30:11.809 "reset": true, 00:30:11.809 "nvme_admin": false, 00:30:11.809 "nvme_io": false, 00:30:11.809 "nvme_io_md": false, 00:30:11.809 "write_zeroes": true, 00:30:11.809 "zcopy": false, 00:30:11.809 "get_zone_info": false, 00:30:11.809 "zone_management": false, 00:30:11.809 "zone_append": false, 00:30:11.809 "compare": false, 00:30:11.809 "compare_and_write": false, 00:30:11.809 "abort": false, 
00:30:11.809 "seek_hole": true, 00:30:11.809 "seek_data": true, 00:30:11.809 "copy": false, 00:30:11.809 "nvme_iov_md": false 00:30:11.809 }, 00:30:11.809 "driver_specific": { 00:30:11.809 "lvol": { 00:30:11.809 "lvol_store_uuid": "db2d9183-054a-4dab-8d5a-78e0f2ba64dc", 00:30:11.809 "base_bdev": "nvme0n1", 00:30:11.809 "thin_provision": true, 00:30:11.809 "num_allocated_clusters": 0, 00:30:11.809 "snapshot": false, 00:30:11.809 "clone": false, 00:30:11.809 "esnap_clone": false 00:30:11.809 } 00:30:11.809 } 00:30:11.809 } 00:30:11.809 ]' 00:30:11.809 12:32:21 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:30:11.809 12:32:21 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:30:11.809 12:32:21 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:30:11.809 12:32:21 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:30:11.809 12:32:21 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:30:11.809 12:32:21 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:30:11.809 12:32:21 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:30:11.809 12:32:21 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:30:11.809 12:32:21 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:30:12.068 12:32:21 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:30:12.068 12:32:21 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:30:12.068 12:32:21 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size bb81b00c-559e-48e5-8b3f-1c3be9e26c09 00:30:12.068 12:32:21 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=bb81b00c-559e-48e5-8b3f-1c3be9e26c09 00:30:12.068 12:32:21 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:30:12.068 12:32:21 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:30:12.068 12:32:21 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:30:12.068 12:32:21 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b bb81b00c-559e-48e5-8b3f-1c3be9e26c09 00:30:12.326 12:32:21 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:30:12.326 { 00:30:12.326 "name": "bb81b00c-559e-48e5-8b3f-1c3be9e26c09", 00:30:12.326 "aliases": [ 00:30:12.326 "lvs/nvme0n1p0" 00:30:12.326 ], 00:30:12.326 "product_name": "Logical Volume", 00:30:12.326 "block_size": 4096, 00:30:12.326 "num_blocks": 26476544, 00:30:12.326 "uuid": "bb81b00c-559e-48e5-8b3f-1c3be9e26c09", 00:30:12.326 "assigned_rate_limits": { 00:30:12.326 "rw_ios_per_sec": 0, 00:30:12.326 "rw_mbytes_per_sec": 0, 00:30:12.326 "r_mbytes_per_sec": 0, 00:30:12.326 "w_mbytes_per_sec": 0 00:30:12.326 }, 00:30:12.326 "claimed": false, 00:30:12.326 "zoned": false, 00:30:12.326 "supported_io_types": { 00:30:12.326 "read": true, 00:30:12.326 "write": true, 00:30:12.326 "unmap": true, 00:30:12.326 "flush": false, 00:30:12.326 "reset": true, 00:30:12.326 "nvme_admin": false, 00:30:12.326 "nvme_io": false, 00:30:12.326 "nvme_io_md": false, 00:30:12.326 "write_zeroes": true, 00:30:12.326 "zcopy": false, 00:30:12.326 "get_zone_info": false, 00:30:12.326 "zone_management": false, 00:30:12.326 "zone_append": false, 00:30:12.326 "compare": false, 00:30:12.326 "compare_and_write": false, 00:30:12.326 "abort": false, 00:30:12.326 "seek_hole": true, 00:30:12.326 "seek_data": true, 
00:30:12.326 "copy": false, 00:30:12.326 "nvme_iov_md": false 00:30:12.326 }, 00:30:12.326 "driver_specific": { 00:30:12.326 "lvol": { 00:30:12.326 "lvol_store_uuid": "db2d9183-054a-4dab-8d5a-78e0f2ba64dc", 00:30:12.326 "base_bdev": "nvme0n1", 00:30:12.326 "thin_provision": true, 00:30:12.326 "num_allocated_clusters": 0, 00:30:12.326 "snapshot": false, 00:30:12.326 "clone": false, 00:30:12.326 "esnap_clone": false 00:30:12.326 } 00:30:12.326 } 00:30:12.326 } 00:30:12.326 ]' 00:30:12.326 12:32:21 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:30:12.326 12:32:21 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:30:12.326 12:32:21 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:30:12.326 12:32:21 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:30:12.326 12:32:21 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:30:12.326 12:32:21 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:30:12.326 12:32:21 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:30:12.326 12:32:21 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:30:12.584 12:32:21 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:30:12.584 12:32:21 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size bb81b00c-559e-48e5-8b3f-1c3be9e26c09 00:30:12.584 12:32:21 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=bb81b00c-559e-48e5-8b3f-1c3be9e26c09 00:30:12.584 12:32:21 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:30:12.584 12:32:21 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:30:12.584 12:32:21 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:30:12.584 12:32:21 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b bb81b00c-559e-48e5-8b3f-1c3be9e26c09 00:30:12.584 12:32:22 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:30:12.584 { 00:30:12.584 "name": "bb81b00c-559e-48e5-8b3f-1c3be9e26c09", 00:30:12.584 "aliases": [ 00:30:12.584 "lvs/nvme0n1p0" 00:30:12.584 ], 00:30:12.584 "product_name": "Logical Volume", 00:30:12.584 "block_size": 4096, 00:30:12.584 "num_blocks": 26476544, 00:30:12.584 "uuid": "bb81b00c-559e-48e5-8b3f-1c3be9e26c09", 00:30:12.584 "assigned_rate_limits": { 00:30:12.584 "rw_ios_per_sec": 0, 00:30:12.584 "rw_mbytes_per_sec": 0, 00:30:12.584 "r_mbytes_per_sec": 0, 00:30:12.584 "w_mbytes_per_sec": 0 00:30:12.584 }, 00:30:12.584 "claimed": false, 00:30:12.584 "zoned": false, 00:30:12.584 "supported_io_types": { 00:30:12.584 "read": true, 00:30:12.584 "write": true, 00:30:12.584 "unmap": true, 00:30:12.584 "flush": false, 00:30:12.584 "reset": true, 00:30:12.584 "nvme_admin": false, 00:30:12.584 "nvme_io": false, 00:30:12.584 "nvme_io_md": false, 00:30:12.584 "write_zeroes": true, 00:30:12.584 "zcopy": false, 00:30:12.584 "get_zone_info": false, 00:30:12.584 "zone_management": false, 00:30:12.584 "zone_append": false, 00:30:12.584 "compare": false, 00:30:12.584 "compare_and_write": false, 00:30:12.584 "abort": false, 00:30:12.584 "seek_hole": true, 00:30:12.584 "seek_data": true, 00:30:12.584 "copy": false, 00:30:12.584 "nvme_iov_md": false 00:30:12.584 }, 00:30:12.584 "driver_specific": { 00:30:12.584 "lvol": { 00:30:12.584 "lvol_store_uuid": "db2d9183-054a-4dab-8d5a-78e0f2ba64dc", 00:30:12.584 "base_bdev": "nvme0n1", 
00:30:12.584 "thin_provision": true, 00:30:12.584 "num_allocated_clusters": 0, 00:30:12.584 "snapshot": false, 00:30:12.584 "clone": false, 00:30:12.584 "esnap_clone": false 00:30:12.584 } 00:30:12.584 } 00:30:12.584 } 00:30:12.584 ]' 00:30:12.584 12:32:22 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:30:12.584 12:32:22 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:30:12.584 12:32:22 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:30:12.842 12:32:22 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:30:12.842 12:32:22 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:30:12.842 12:32:22 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:30:12.842 12:32:22 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:30:12.842 12:32:22 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d bb81b00c-559e-48e5-8b3f-1c3be9e26c09 --l2p_dram_limit 10' 00:30:12.842 12:32:22 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:30:12.842 12:32:22 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:30:12.842 12:32:22 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:30:12.842 12:32:22 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:30:12.842 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:30:12.842 12:32:22 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d bb81b00c-559e-48e5-8b3f-1c3be9e26c09 --l2p_dram_limit 10 -c nvc0n1p0 00:30:12.842 [2024-07-10 12:32:22.274708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:12.842 [2024-07-10 12:32:22.274808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:12.842 [2024-07-10 12:32:22.274828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:12.842 [2024-07-10 12:32:22.274842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:12.842 [2024-07-10 12:32:22.274914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:12.842 [2024-07-10 12:32:22.274930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:12.842 [2024-07-10 12:32:22.274941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:30:12.842 [2024-07-10 12:32:22.274955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:12.842 [2024-07-10 12:32:22.274977] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:12.842 [2024-07-10 12:32:22.276147] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:12.842 [2024-07-10 12:32:22.276175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:12.842 [2024-07-10 12:32:22.276194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:12.842 [2024-07-10 12:32:22.276206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.204 ms 00:30:12.842 [2024-07-10 12:32:22.276219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:12.842 [2024-07-10 12:32:22.276258] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 1755b5ad-d915-4770-b62e-8f6a41c87fae 00:30:12.842 [2024-07-10 
12:32:22.278714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:12.842 [2024-07-10 12:32:22.278755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:30:12.842 [2024-07-10 12:32:22.278772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:30:12.842 [2024-07-10 12:32:22.278782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:12.842 [2024-07-10 12:32:22.292156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:12.842 [2024-07-10 12:32:22.292191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:12.842 [2024-07-10 12:32:22.292211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.286 ms 00:30:12.842 [2024-07-10 12:32:22.292222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:12.842 [2024-07-10 12:32:22.292330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:12.842 [2024-07-10 12:32:22.292345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:12.842 [2024-07-10 12:32:22.292360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:30:12.842 [2024-07-10 12:32:22.292370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:12.842 [2024-07-10 12:32:22.292438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:12.842 [2024-07-10 12:32:22.292450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:12.842 [2024-07-10 12:32:22.292464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:30:12.842 [2024-07-10 12:32:22.292478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:12.842 [2024-07-10 12:32:22.292508] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:12.842 [2024-07-10 12:32:22.299211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:12.842 [2024-07-10 12:32:22.299248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:12.842 [2024-07-10 12:32:22.299260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.725 ms 00:30:12.842 [2024-07-10 12:32:22.299275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:12.842 [2024-07-10 12:32:22.299314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:12.842 [2024-07-10 12:32:22.299329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:12.842 [2024-07-10 12:32:22.299339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:30:12.842 [2024-07-10 12:32:22.299352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:12.842 [2024-07-10 12:32:22.299386] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:30:12.843 [2024-07-10 12:32:22.299519] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:12.843 [2024-07-10 12:32:22.299532] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:12.843 [2024-07-10 12:32:22.299551] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:30:12.843 [2024-07-10 12:32:22.299563] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 
103424.00 MiB 00:30:12.843 [2024-07-10 12:32:22.299578] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:12.843 [2024-07-10 12:32:22.299589] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:12.843 [2024-07-10 12:32:22.299602] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:12.843 [2024-07-10 12:32:22.299614] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:12.843 [2024-07-10 12:32:22.299627] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:12.843 [2024-07-10 12:32:22.299637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:12.843 [2024-07-10 12:32:22.299650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:12.843 [2024-07-10 12:32:22.299660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.253 ms 00:30:12.843 [2024-07-10 12:32:22.299673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:12.843 [2024-07-10 12:32:22.299755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:12.843 [2024-07-10 12:32:22.299786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:12.843 [2024-07-10 12:32:22.299797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:30:12.843 [2024-07-10 12:32:22.299810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:12.843 [2024-07-10 12:32:22.299910] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:12.843 [2024-07-10 12:32:22.299928] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:12.843 [2024-07-10 12:32:22.299949] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:12.843 [2024-07-10 12:32:22.299963] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:12.843 [2024-07-10 12:32:22.299974] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:12.843 [2024-07-10 12:32:22.299986] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:12.843 [2024-07-10 12:32:22.299995] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:12.843 [2024-07-10 12:32:22.300007] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:12.843 [2024-07-10 12:32:22.300018] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:12.843 [2024-07-10 12:32:22.300030] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:12.843 [2024-07-10 12:32:22.300039] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:12.843 [2024-07-10 12:32:22.300051] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:12.843 [2024-07-10 12:32:22.300069] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:12.843 [2024-07-10 12:32:22.300082] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:12.843 [2024-07-10 12:32:22.300092] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:12.843 [2024-07-10 12:32:22.300104] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:12.843 [2024-07-10 12:32:22.300113] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:12.843 [2024-07-10 12:32:22.300127] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 
00:30:12.843 [2024-07-10 12:32:22.300137] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:12.843 [2024-07-10 12:32:22.300149] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:12.843 [2024-07-10 12:32:22.300158] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:12.843 [2024-07-10 12:32:22.300169] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:12.843 [2024-07-10 12:32:22.300179] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:12.843 [2024-07-10 12:32:22.300191] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:12.843 [2024-07-10 12:32:22.300200] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:12.843 [2024-07-10 12:32:22.300211] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:12.843 [2024-07-10 12:32:22.300221] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:12.843 [2024-07-10 12:32:22.300232] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:12.843 [2024-07-10 12:32:22.300242] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:12.843 [2024-07-10 12:32:22.300254] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:12.843 [2024-07-10 12:32:22.300263] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:12.843 [2024-07-10 12:32:22.300275] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:12.843 [2024-07-10 12:32:22.300285] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:12.843 [2024-07-10 12:32:22.300299] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:12.843 [2024-07-10 12:32:22.300308] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:12.843 [2024-07-10 12:32:22.300320] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:12.843 [2024-07-10 12:32:22.300329] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:12.843 [2024-07-10 12:32:22.300340] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:12.843 [2024-07-10 12:32:22.300349] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:12.843 [2024-07-10 12:32:22.300363] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:12.843 [2024-07-10 12:32:22.300373] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:12.843 [2024-07-10 12:32:22.300384] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:12.843 [2024-07-10 12:32:22.300393] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:12.843 [2024-07-10 12:32:22.300404] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:12.843 [2024-07-10 12:32:22.300415] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:12.843 [2024-07-10 12:32:22.300428] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:12.843 [2024-07-10 12:32:22.300438] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:12.843 [2024-07-10 12:32:22.300450] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:12.843 [2024-07-10 12:32:22.300460] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:12.843 [2024-07-10 12:32:22.300474] ftl_layout.c: 121:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:12.843 [2024-07-10 12:32:22.300484] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:12.843 [2024-07-10 12:32:22.300496] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:12.843 [2024-07-10 12:32:22.300505] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:12.843 [2024-07-10 12:32:22.300521] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:12.843 [2024-07-10 12:32:22.300534] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:12.843 [2024-07-10 12:32:22.300551] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:12.843 [2024-07-10 12:32:22.300562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:12.843 [2024-07-10 12:32:22.300576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:12.843 [2024-07-10 12:32:22.300586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:12.843 [2024-07-10 12:32:22.300600] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:12.843 [2024-07-10 12:32:22.300610] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:12.843 [2024-07-10 12:32:22.300624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:12.843 [2024-07-10 12:32:22.300635] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:12.843 [2024-07-10 12:32:22.300649] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:12.843 [2024-07-10 12:32:22.300660] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:12.843 [2024-07-10 12:32:22.300676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:12.843 [2024-07-10 12:32:22.300688] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:12.843 [2024-07-10 12:32:22.300701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:12.843 [2024-07-10 12:32:22.300712] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:12.843 [2024-07-10 12:32:22.300724] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:12.843 [2024-07-10 12:32:22.300746] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:12.843 [2024-07-10 12:32:22.300762] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:12.843 [2024-07-10 12:32:22.300773] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:12.843 [2024-07-10 12:32:22.300786] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:12.843 [2024-07-10 12:32:22.300797] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:12.843 [2024-07-10 12:32:22.300811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:12.843 [2024-07-10 12:32:22.300822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:12.843 [2024-07-10 12:32:22.300835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.963 ms 00:30:12.843 [2024-07-10 12:32:22.300846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:12.843 [2024-07-10 12:32:22.300893] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:30:12.843 [2024-07-10 12:32:22.300909] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:30:16.126 [2024-07-10 12:32:25.524656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.126 [2024-07-10 12:32:25.524747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:30:16.126 [2024-07-10 12:32:25.524771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3228.984 ms 00:30:16.126 [2024-07-10 12:32:25.524783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.126 [2024-07-10 12:32:25.574569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.126 [2024-07-10 12:32:25.574633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:16.126 [2024-07-10 12:32:25.574655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.470 ms 00:30:16.126 [2024-07-10 12:32:25.574667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.126 [2024-07-10 12:32:25.574848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.126 [2024-07-10 12:32:25.574863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:16.126 [2024-07-10 12:32:25.574878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:30:16.126 [2024-07-10 12:32:25.574892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.384 [2024-07-10 12:32:25.628711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.385 [2024-07-10 12:32:25.628778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:16.385 [2024-07-10 12:32:25.628796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.854 ms 00:30:16.385 [2024-07-10 12:32:25.628807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.385 [2024-07-10 12:32:25.628855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.385 [2024-07-10 12:32:25.628874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:16.385 [2024-07-10 12:32:25.628889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.004 ms 00:30:16.385 [2024-07-10 12:32:25.628899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.385 [2024-07-10 12:32:25.629397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.385 [2024-07-10 12:32:25.629411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:16.385 [2024-07-10 12:32:25.629425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.421 ms 00:30:16.385 [2024-07-10 12:32:25.629435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.385 [2024-07-10 12:32:25.629550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.385 [2024-07-10 12:32:25.629563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:16.385 [2024-07-10 12:32:25.629580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:30:16.385 [2024-07-10 12:32:25.629590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.385 [2024-07-10 12:32:25.651980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.385 [2024-07-10 12:32:25.652038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:16.385 [2024-07-10 12:32:25.652057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.399 ms 00:30:16.385 [2024-07-10 12:32:25.652090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.385 [2024-07-10 12:32:25.665158] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:16.385 [2024-07-10 12:32:25.668436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.385 [2024-07-10 12:32:25.668469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:16.385 [2024-07-10 12:32:25.668483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.257 ms 00:30:16.385 [2024-07-10 12:32:25.668495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.385 [2024-07-10 12:32:25.770529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.385 [2024-07-10 12:32:25.770610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:30:16.385 [2024-07-10 12:32:25.770630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 102.159 ms 00:30:16.385 [2024-07-10 12:32:25.770644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.385 [2024-07-10 12:32:25.770872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.385 [2024-07-10 12:32:25.770895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:16.385 [2024-07-10 12:32:25.770907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.182 ms 00:30:16.385 [2024-07-10 12:32:25.770924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.385 [2024-07-10 12:32:25.808305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.385 [2024-07-10 12:32:25.808352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:30:16.385 [2024-07-10 12:32:25.808369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.389 ms 00:30:16.385 [2024-07-10 12:32:25.808382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.385 [2024-07-10 12:32:25.846729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.385 [2024-07-10 
12:32:25.846787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:30:16.385 [2024-07-10 12:32:25.846805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.360 ms 00:30:16.385 [2024-07-10 12:32:25.846817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.385 [2024-07-10 12:32:25.847549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.385 [2024-07-10 12:32:25.847584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:16.385 [2024-07-10 12:32:25.847596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.691 ms 00:30:16.385 [2024-07-10 12:32:25.847612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.643 [2024-07-10 12:32:25.954906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.644 [2024-07-10 12:32:25.954977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:30:16.644 [2024-07-10 12:32:25.954996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 107.412 ms 00:30:16.644 [2024-07-10 12:32:25.955014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.644 [2024-07-10 12:32:25.995648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.644 [2024-07-10 12:32:25.995704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:30:16.644 [2024-07-10 12:32:25.995721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.651 ms 00:30:16.644 [2024-07-10 12:32:25.995745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.644 [2024-07-10 12:32:26.033072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.644 [2024-07-10 12:32:26.033123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:30:16.644 [2024-07-10 12:32:26.033156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.341 ms 00:30:16.644 [2024-07-10 12:32:26.033169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.644 [2024-07-10 12:32:26.070551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.644 [2024-07-10 12:32:26.070598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:16.644 [2024-07-10 12:32:26.070613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.399 ms 00:30:16.644 [2024-07-10 12:32:26.070627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.644 [2024-07-10 12:32:26.070688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.644 [2024-07-10 12:32:26.070704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:16.644 [2024-07-10 12:32:26.070717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:30:16.644 [2024-07-10 12:32:26.070751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.644 [2024-07-10 12:32:26.070853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.644 [2024-07-10 12:32:26.070869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:16.644 [2024-07-10 12:32:26.070884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:30:16.644 [2024-07-10 12:32:26.070897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.644 [2024-07-10 12:32:26.071903] mngt/ftl_mngt.c: 
459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3802.892 ms, result 0 00:30:16.644 { 00:30:16.644 "name": "ftl0", 00:30:16.644 "uuid": "1755b5ad-d915-4770-b62e-8f6a41c87fae" 00:30:16.644 } 00:30:16.644 12:32:26 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:30:16.644 12:32:26 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:30:16.928 12:32:26 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:30:16.928 12:32:26 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:30:17.187 [2024-07-10 12:32:26.458664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.187 [2024-07-10 12:32:26.458745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:17.187 [2024-07-10 12:32:26.458767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:17.187 [2024-07-10 12:32:26.458793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.187 [2024-07-10 12:32:26.458825] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:17.187 [2024-07-10 12:32:26.462993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.187 [2024-07-10 12:32:26.463030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:17.187 [2024-07-10 12:32:26.463043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.155 ms 00:30:17.187 [2024-07-10 12:32:26.463072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.187 [2024-07-10 12:32:26.463328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.187 [2024-07-10 12:32:26.463352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:17.187 [2024-07-10 12:32:26.463378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.228 ms 00:30:17.187 [2024-07-10 12:32:26.463391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.187 [2024-07-10 12:32:26.465919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.187 [2024-07-10 12:32:26.465946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:17.187 [2024-07-10 12:32:26.465958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.514 ms 00:30:17.187 [2024-07-10 12:32:26.465971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.187 [2024-07-10 12:32:26.470888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.187 [2024-07-10 12:32:26.470928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:17.187 [2024-07-10 12:32:26.470945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.906 ms 00:30:17.187 [2024-07-10 12:32:26.470957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.187 [2024-07-10 12:32:26.509333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.187 [2024-07-10 12:32:26.509386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:17.187 [2024-07-10 12:32:26.509401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.372 ms 00:30:17.187 [2024-07-10 12:32:26.509431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.187 [2024-07-10 
12:32:26.533564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.187 [2024-07-10 12:32:26.533619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:17.187 [2024-07-10 12:32:26.533634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.111 ms 00:30:17.187 [2024-07-10 12:32:26.533648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.187 [2024-07-10 12:32:26.533823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.187 [2024-07-10 12:32:26.533861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:17.187 [2024-07-10 12:32:26.533874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:30:17.187 [2024-07-10 12:32:26.533887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.187 [2024-07-10 12:32:26.570945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.187 [2024-07-10 12:32:26.570986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:30:17.187 [2024-07-10 12:32:26.571001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.098 ms 00:30:17.188 [2024-07-10 12:32:26.571014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.188 [2024-07-10 12:32:26.608053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.188 [2024-07-10 12:32:26.608100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:30:17.188 [2024-07-10 12:32:26.608114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.058 ms 00:30:17.188 [2024-07-10 12:32:26.608126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.188 [2024-07-10 12:32:26.646483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.188 [2024-07-10 12:32:26.646553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:17.188 [2024-07-10 12:32:26.646585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.360 ms 00:30:17.188 [2024-07-10 12:32:26.646598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.448 [2024-07-10 12:32:26.683695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.448 [2024-07-10 12:32:26.683745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:17.448 [2024-07-10 12:32:26.683775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.047 ms 00:30:17.448 [2024-07-10 12:32:26.683787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.448 [2024-07-10 12:32:26.683830] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:17.448 [2024-07-10 12:32:26.683852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:30:17.448 [2024-07-10 12:32:26.683881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:17.448 [2024-07-10 12:32:26.683894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:17.448 [2024-07-10 12:32:26.683905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:17.448 [2024-07-10 12:32:26.683920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:17.448 [2024-07-10 
12:32:26.683931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:17.448 [2024-07-10 12:32:26.683961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:17.448 [2024-07-10 12:32:26.683972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:17.448 [2024-07-10 12:32:26.683989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:17.448 [2024-07-10 12:32:26.684000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:17.448 [2024-07-10 12:32:26.684014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:17.448 [2024-07-10 12:32:26.684025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:17.448 [2024-07-10 12:32:26.684039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:17.448 [2024-07-10 12:32:26.684050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:17.448 [2024-07-10 12:32:26.684071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:17.448 [2024-07-10 12:32:26.684082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:17.448 [2024-07-10 12:32:26.684095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:17.448 [2024-07-10 12:32:26.684106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:17.448 [2024-07-10 12:32:26.684120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:17.448 [2024-07-10 12:32:26.684130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:17.448 [2024-07-10 12:32:26.684145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:17.448 [2024-07-10 12:32:26.684156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:17.448 [2024-07-10 12:32:26.684170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:17.448 [2024-07-10 12:32:26.684181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:17.448 [2024-07-10 12:32:26.684198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:17.448 [2024-07-10 12:32:26.684209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:17.448 [2024-07-10 12:32:26.684222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:17.448 [2024-07-10 12:32:26.684233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:17.448 [2024-07-10 12:32:26.684246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:17.448 [2024-07-10 12:32:26.684257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 
00:30:17.448 [2024-07-10 12:32:26.684271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:17.448 [2024-07-10 12:32:26.684284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:17.448 [2024-07-10 12:32:26.684297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:17.448 [2024-07-10 12:32:26.684308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:17.448 [2024-07-10 12:32:26.684322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:17.448 [2024-07-10 12:32:26.684333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:17.448 [2024-07-10 12:32:26.684347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:17.448 [2024-07-10 12:32:26.684357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:17.448 [2024-07-10 12:32:26.684373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:17.448 [2024-07-10 12:32:26.684384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:17.448 [2024-07-10 12:32:26.684400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:17.448 [2024-07-10 12:32:26.684411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:17.448 [2024-07-10 12:32:26.684424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:17.448 [2024-07-10 12:32:26.684436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:17.448 [2024-07-10 12:32:26.684450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:17.448 [2024-07-10 12:32:26.684460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:17.448 [2024-07-10 12:32:26.684475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:17.448 [2024-07-10 12:32:26.684486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:17.448 [2024-07-10 12:32:26.684500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:17.448 [2024-07-10 12:32:26.684511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:17.448 [2024-07-10 12:32:26.684524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:17.448 [2024-07-10 12:32:26.684535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:17.448 [2024-07-10 12:32:26.684549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:17.448 [2024-07-10 12:32:26.684560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:17.448 [2024-07-10 12:32:26.684573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 
wr_cnt: 0 state: free 00:30:17.448 [2024-07-10 12:32:26.684584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:17.448 [2024-07-10 12:32:26.684600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:17.448 [2024-07-10 12:32:26.684611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:17.448 [2024-07-10 12:32:26.684625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:17.448 [2024-07-10 12:32:26.684636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:17.449 [2024-07-10 12:32:26.684650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:17.449 [2024-07-10 12:32:26.684661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:17.449 [2024-07-10 12:32:26.684675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:17.449 [2024-07-10 12:32:26.684686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:17.449 [2024-07-10 12:32:26.684701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:17.449 [2024-07-10 12:32:26.684724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:17.449 [2024-07-10 12:32:26.684749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:17.449 [2024-07-10 12:32:26.684760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:17.449 [2024-07-10 12:32:26.684773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:17.449 [2024-07-10 12:32:26.684785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:17.449 [2024-07-10 12:32:26.684799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:17.449 [2024-07-10 12:32:26.684810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:17.449 [2024-07-10 12:32:26.684828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:17.449 [2024-07-10 12:32:26.684839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:17.449 [2024-07-10 12:32:26.684852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:17.449 [2024-07-10 12:32:26.684863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:17.449 [2024-07-10 12:32:26.684876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:17.449 [2024-07-10 12:32:26.684887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:17.449 [2024-07-10 12:32:26.684901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:17.449 [2024-07-10 12:32:26.684911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:17.449 [2024-07-10 12:32:26.684924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:17.449 [2024-07-10 12:32:26.684937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:17.449 [2024-07-10 12:32:26.684951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:17.449 [2024-07-10 12:32:26.684961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:17.449 [2024-07-10 12:32:26.684974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:17.449 [2024-07-10 12:32:26.684985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:17.449 [2024-07-10 12:32:26.684999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:17.449 [2024-07-10 12:32:26.685010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:17.449 [2024-07-10 12:32:26.685026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:17.449 [2024-07-10 12:32:26.685037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:17.449 [2024-07-10 12:32:26.685051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:17.449 [2024-07-10 12:32:26.685062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:17.449 [2024-07-10 12:32:26.685075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:17.449 [2024-07-10 12:32:26.685086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:17.449 [2024-07-10 12:32:26.685100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:17.449 [2024-07-10 12:32:26.685112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:17.449 [2024-07-10 12:32:26.685127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:17.449 [2024-07-10 12:32:26.685139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:17.449 [2024-07-10 12:32:26.685154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:17.449 [2024-07-10 12:32:26.685165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:17.449 [2024-07-10 12:32:26.685186] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:17.449 [2024-07-10 12:32:26.685199] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1755b5ad-d915-4770-b62e-8f6a41c87fae 00:30:17.449 [2024-07-10 12:32:26.685214] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:30:17.449 [2024-07-10 12:32:26.685224] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:30:17.449 [2024-07-10 12:32:26.685239] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:17.449 [2024-07-10 12:32:26.685250] 
ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:17.449 [2024-07-10 12:32:26.685263] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:17.449 [2024-07-10 12:32:26.685273] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:17.449 [2024-07-10 12:32:26.685285] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:17.449 [2024-07-10 12:32:26.685294] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:17.449 [2024-07-10 12:32:26.685306] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:17.449 [2024-07-10 12:32:26.685316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.449 [2024-07-10 12:32:26.685329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:17.449 [2024-07-10 12:32:26.685340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.491 ms 00:30:17.449 [2024-07-10 12:32:26.685353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.449 [2024-07-10 12:32:26.705924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.449 [2024-07-10 12:32:26.705981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:17.449 [2024-07-10 12:32:26.705996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.546 ms 00:30:17.449 [2024-07-10 12:32:26.706009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.449 [2024-07-10 12:32:26.706515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.449 [2024-07-10 12:32:26.706533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:17.449 [2024-07-10 12:32:26.706544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.474 ms 00:30:17.449 [2024-07-10 12:32:26.706562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.449 [2024-07-10 12:32:26.770418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:17.449 [2024-07-10 12:32:26.770492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:17.449 [2024-07-10 12:32:26.770526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:17.449 [2024-07-10 12:32:26.770539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.449 [2024-07-10 12:32:26.770620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:17.449 [2024-07-10 12:32:26.770635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:17.449 [2024-07-10 12:32:26.770646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:17.449 [2024-07-10 12:32:26.770662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.449 [2024-07-10 12:32:26.770797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:17.449 [2024-07-10 12:32:26.770817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:17.449 [2024-07-10 12:32:26.770829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:17.449 [2024-07-10 12:32:26.770841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.449 [2024-07-10 12:32:26.770863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:17.449 [2024-07-10 12:32:26.770880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid 
map 00:30:17.449 [2024-07-10 12:32:26.770891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:17.449 [2024-07-10 12:32:26.770903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.449 [2024-07-10 12:32:26.895529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:17.449 [2024-07-10 12:32:26.895608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:17.449 [2024-07-10 12:32:26.895626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:17.449 [2024-07-10 12:32:26.895640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.708 [2024-07-10 12:32:27.002640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:17.708 [2024-07-10 12:32:27.002720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:17.708 [2024-07-10 12:32:27.002753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:17.708 [2024-07-10 12:32:27.002772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.708 [2024-07-10 12:32:27.002901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:17.708 [2024-07-10 12:32:27.002917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:17.708 [2024-07-10 12:32:27.002928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:17.708 [2024-07-10 12:32:27.002941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.708 [2024-07-10 12:32:27.002992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:17.708 [2024-07-10 12:32:27.003012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:17.708 [2024-07-10 12:32:27.003023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:17.708 [2024-07-10 12:32:27.003035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.708 [2024-07-10 12:32:27.003172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:17.708 [2024-07-10 12:32:27.003188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:17.708 [2024-07-10 12:32:27.003199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:17.708 [2024-07-10 12:32:27.003213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.708 [2024-07-10 12:32:27.003251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:17.708 [2024-07-10 12:32:27.003268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:17.708 [2024-07-10 12:32:27.003279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:17.708 [2024-07-10 12:32:27.003291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.709 [2024-07-10 12:32:27.003338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:17.709 [2024-07-10 12:32:27.003353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:17.709 [2024-07-10 12:32:27.003363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:17.709 [2024-07-10 12:32:27.003376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.709 [2024-07-10 12:32:27.003422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:17.709 [2024-07-10 12:32:27.003440] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:17.709 [2024-07-10 12:32:27.003451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:17.709 [2024-07-10 12:32:27.003463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.709 [2024-07-10 12:32:27.003606] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 545.789 ms, result 0 00:30:17.709 true 00:30:17.709 12:32:27 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 81863 00:30:17.709 12:32:27 ftl.ftl_restore -- common/autotest_common.sh@948 -- # '[' -z 81863 ']' 00:30:17.709 12:32:27 ftl.ftl_restore -- common/autotest_common.sh@952 -- # kill -0 81863 00:30:17.709 12:32:27 ftl.ftl_restore -- common/autotest_common.sh@953 -- # uname 00:30:17.709 12:32:27 ftl.ftl_restore -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:17.709 12:32:27 ftl.ftl_restore -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81863 00:30:17.709 killing process with pid 81863 00:30:17.709 12:32:27 ftl.ftl_restore -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:17.709 12:32:27 ftl.ftl_restore -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:17.709 12:32:27 ftl.ftl_restore -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81863' 00:30:17.709 12:32:27 ftl.ftl_restore -- common/autotest_common.sh@967 -- # kill 81863 00:30:17.709 12:32:27 ftl.ftl_restore -- common/autotest_common.sh@972 -- # wait 81863 00:30:23.057 12:32:32 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:30:27.257 262144+0 records in 00:30:27.257 262144+0 records out 00:30:27.257 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.92347 s, 274 MB/s 00:30:27.257 12:32:36 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:30:28.635 12:32:37 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:28.635 [2024-07-10 12:32:37.858340] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:30:28.635 [2024-07-10 12:32:37.858475] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82096 ] 00:30:28.635 [2024-07-10 12:32:38.026288] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:28.894 [2024-07-10 12:32:38.267818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:29.465 [2024-07-10 12:32:38.664252] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:29.465 [2024-07-10 12:32:38.664337] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:29.465 [2024-07-10 12:32:38.826011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.465 [2024-07-10 12:32:38.826094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:29.465 [2024-07-10 12:32:38.826113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:29.465 [2024-07-10 12:32:38.826124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.465 [2024-07-10 12:32:38.826189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.465 [2024-07-10 12:32:38.826203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:29.465 [2024-07-10 12:32:38.826214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:30:29.465 [2024-07-10 12:32:38.826227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.465 [2024-07-10 12:32:38.826249] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:29.465 [2024-07-10 12:32:38.827395] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:29.465 [2024-07-10 12:32:38.827432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.465 [2024-07-10 12:32:38.827447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:29.465 [2024-07-10 12:32:38.827459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.190 ms 00:30:29.465 [2024-07-10 12:32:38.827469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.465 [2024-07-10 12:32:38.828950] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:30:29.465 [2024-07-10 12:32:38.849812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.465 [2024-07-10 12:32:38.849858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:29.465 [2024-07-10 12:32:38.849875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.896 ms 00:30:29.465 [2024-07-10 12:32:38.849886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.465 [2024-07-10 12:32:38.849959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.465 [2024-07-10 12:32:38.849972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:29.465 [2024-07-10 12:32:38.849986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:30:29.465 [2024-07-10 12:32:38.849996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.465 [2024-07-10 12:32:38.857048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:30:29.465 [2024-07-10 12:32:38.857079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:29.465 [2024-07-10 12:32:38.857093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.987 ms 00:30:29.465 [2024-07-10 12:32:38.857104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.465 [2024-07-10 12:32:38.857188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.465 [2024-07-10 12:32:38.857205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:29.465 [2024-07-10 12:32:38.857216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:30:29.465 [2024-07-10 12:32:38.857226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.465 [2024-07-10 12:32:38.857272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.465 [2024-07-10 12:32:38.857284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:29.465 [2024-07-10 12:32:38.857295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:30:29.465 [2024-07-10 12:32:38.857305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.465 [2024-07-10 12:32:38.857331] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:29.465 [2024-07-10 12:32:38.862857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.465 [2024-07-10 12:32:38.862889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:29.465 [2024-07-10 12:32:38.862902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.540 ms 00:30:29.465 [2024-07-10 12:32:38.862912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.465 [2024-07-10 12:32:38.862949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.465 [2024-07-10 12:32:38.862960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:29.465 [2024-07-10 12:32:38.862970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:30:29.465 [2024-07-10 12:32:38.862980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.465 [2024-07-10 12:32:38.863032] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:29.465 [2024-07-10 12:32:38.863056] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:29.465 [2024-07-10 12:32:38.863091] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:29.465 [2024-07-10 12:32:38.863112] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:30:29.465 [2024-07-10 12:32:38.863198] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:29.465 [2024-07-10 12:32:38.863211] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:29.466 [2024-07-10 12:32:38.863225] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:30:29.466 [2024-07-10 12:32:38.863238] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:29.466 [2024-07-10 12:32:38.863249] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:29.466 [2024-07-10 12:32:38.863261] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:29.466 [2024-07-10 12:32:38.863270] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:29.466 [2024-07-10 12:32:38.863280] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:29.466 [2024-07-10 12:32:38.863290] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:29.466 [2024-07-10 12:32:38.863300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.466 [2024-07-10 12:32:38.863313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:29.466 [2024-07-10 12:32:38.863324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:30:29.466 [2024-07-10 12:32:38.863333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.466 [2024-07-10 12:32:38.863399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.466 [2024-07-10 12:32:38.863409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:29.466 [2024-07-10 12:32:38.863419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:30:29.466 [2024-07-10 12:32:38.863428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.466 [2024-07-10 12:32:38.863509] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:29.466 [2024-07-10 12:32:38.863521] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:29.466 [2024-07-10 12:32:38.863536] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:29.466 [2024-07-10 12:32:38.863546] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:29.466 [2024-07-10 12:32:38.863556] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:29.466 [2024-07-10 12:32:38.863565] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:29.466 [2024-07-10 12:32:38.863574] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:29.466 [2024-07-10 12:32:38.863584] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:29.466 [2024-07-10 12:32:38.863594] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:29.466 [2024-07-10 12:32:38.863602] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:29.466 [2024-07-10 12:32:38.863612] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:29.466 [2024-07-10 12:32:38.863621] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:29.466 [2024-07-10 12:32:38.863630] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:29.466 [2024-07-10 12:32:38.863640] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:29.466 [2024-07-10 12:32:38.863650] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:29.466 [2024-07-10 12:32:38.863659] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:29.466 [2024-07-10 12:32:38.863668] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:29.466 [2024-07-10 12:32:38.863677] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:29.466 [2024-07-10 12:32:38.863686] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:29.466 [2024-07-10 12:32:38.863694] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:29.466 [2024-07-10 12:32:38.863717] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:29.466 [2024-07-10 12:32:38.863726] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:29.466 [2024-07-10 12:32:38.863753] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:29.466 [2024-07-10 12:32:38.863762] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:29.466 [2024-07-10 12:32:38.863771] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:29.466 [2024-07-10 12:32:38.863780] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:29.466 [2024-07-10 12:32:38.863789] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:29.466 [2024-07-10 12:32:38.863799] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:29.466 [2024-07-10 12:32:38.863808] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:29.466 [2024-07-10 12:32:38.863817] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:29.466 [2024-07-10 12:32:38.863827] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:29.466 [2024-07-10 12:32:38.863836] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:29.466 [2024-07-10 12:32:38.863845] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:29.466 [2024-07-10 12:32:38.863855] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:29.466 [2024-07-10 12:32:38.863864] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:29.466 [2024-07-10 12:32:38.863873] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:29.466 [2024-07-10 12:32:38.863882] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:29.466 [2024-07-10 12:32:38.863891] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:29.466 [2024-07-10 12:32:38.863900] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:29.466 [2024-07-10 12:32:38.863909] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:29.466 [2024-07-10 12:32:38.863919] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:29.466 [2024-07-10 12:32:38.863928] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:29.466 [2024-07-10 12:32:38.863938] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:29.466 [2024-07-10 12:32:38.863946] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:29.466 [2024-07-10 12:32:38.863957] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:29.466 [2024-07-10 12:32:38.863968] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:29.466 [2024-07-10 12:32:38.863978] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:29.466 [2024-07-10 12:32:38.863988] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:29.466 [2024-07-10 12:32:38.863998] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:29.466 [2024-07-10 12:32:38.864007] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:29.466 
[2024-07-10 12:32:38.864016] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:29.466 [2024-07-10 12:32:38.864025] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:29.466 [2024-07-10 12:32:38.864034] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:29.466 [2024-07-10 12:32:38.864045] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:29.466 [2024-07-10 12:32:38.864057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:29.466 [2024-07-10 12:32:38.864075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:29.466 [2024-07-10 12:32:38.864086] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:29.466 [2024-07-10 12:32:38.864113] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:29.466 [2024-07-10 12:32:38.864125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:29.466 [2024-07-10 12:32:38.864135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:29.466 [2024-07-10 12:32:38.864146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:29.466 [2024-07-10 12:32:38.864156] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:29.466 [2024-07-10 12:32:38.864167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:29.466 [2024-07-10 12:32:38.864177] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:29.466 [2024-07-10 12:32:38.864187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:29.466 [2024-07-10 12:32:38.864197] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:29.466 [2024-07-10 12:32:38.864207] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:29.466 [2024-07-10 12:32:38.864218] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:29.466 [2024-07-10 12:32:38.864229] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:29.466 [2024-07-10 12:32:38.864240] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:29.466 [2024-07-10 12:32:38.864252] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:29.466 [2024-07-10 12:32:38.864263] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:30:29.466 [2024-07-10 12:32:38.864274] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:29.466 [2024-07-10 12:32:38.864285] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:29.466 [2024-07-10 12:32:38.864295] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:29.466 [2024-07-10 12:32:38.864306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.466 [2024-07-10 12:32:38.864320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:29.466 [2024-07-10 12:32:38.864330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.849 ms 00:30:29.466 [2024-07-10 12:32:38.864340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.466 [2024-07-10 12:32:38.917347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.466 [2024-07-10 12:32:38.917596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:29.466 [2024-07-10 12:32:38.917717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.038 ms 00:30:29.466 [2024-07-10 12:32:38.917776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.466 [2024-07-10 12:32:38.917906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.466 [2024-07-10 12:32:38.917939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:29.466 [2024-07-10 12:32:38.918024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:30:29.466 [2024-07-10 12:32:38.918059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.725 [2024-07-10 12:32:38.968935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.725 [2024-07-10 12:32:38.969153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:29.725 [2024-07-10 12:32:38.969277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.851 ms 00:30:29.725 [2024-07-10 12:32:38.969314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.725 [2024-07-10 12:32:38.969395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.725 [2024-07-10 12:32:38.969430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:29.725 [2024-07-10 12:32:38.969460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:29.725 [2024-07-10 12:32:38.969537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.725 [2024-07-10 12:32:38.970080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.725 [2024-07-10 12:32:38.970135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:29.725 [2024-07-10 12:32:38.970216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.436 ms 00:30:29.725 [2024-07-10 12:32:38.970251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.726 [2024-07-10 12:32:38.970401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.726 [2024-07-10 12:32:38.970479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:29.726 [2024-07-10 12:32:38.970517] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:30:29.726 [2024-07-10 12:32:38.970547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.726 [2024-07-10 12:32:38.991635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.726 [2024-07-10 12:32:38.991837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:29.726 [2024-07-10 12:32:38.991986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.029 ms 00:30:29.726 [2024-07-10 12:32:38.992024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.726 [2024-07-10 12:32:39.013377] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:30:29.726 [2024-07-10 12:32:39.013554] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:29.726 [2024-07-10 12:32:39.013684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.726 [2024-07-10 12:32:39.013717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:29.726 [2024-07-10 12:32:39.013768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.526 ms 00:30:29.726 [2024-07-10 12:32:39.013800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.726 [2024-07-10 12:32:39.044154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.726 [2024-07-10 12:32:39.044208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:29.726 [2024-07-10 12:32:39.044225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.337 ms 00:30:29.726 [2024-07-10 12:32:39.044236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.726 [2024-07-10 12:32:39.065028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.726 [2024-07-10 12:32:39.065078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:29.726 [2024-07-10 12:32:39.065096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.759 ms 00:30:29.726 [2024-07-10 12:32:39.065106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.726 [2024-07-10 12:32:39.085705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.726 [2024-07-10 12:32:39.085778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:29.726 [2024-07-10 12:32:39.085795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.581 ms 00:30:29.726 [2024-07-10 12:32:39.085806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.726 [2024-07-10 12:32:39.086645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.726 [2024-07-10 12:32:39.086671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:29.726 [2024-07-10 12:32:39.086684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.721 ms 00:30:29.726 [2024-07-10 12:32:39.086695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.726 [2024-07-10 12:32:39.177671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.726 [2024-07-10 12:32:39.177754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:29.726 [2024-07-10 12:32:39.177773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 91.101 ms 00:30:29.726 [2024-07-10 12:32:39.177784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.726 [2024-07-10 12:32:39.191152] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:29.726 [2024-07-10 12:32:39.194526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.726 [2024-07-10 12:32:39.194573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:29.726 [2024-07-10 12:32:39.194590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.706 ms 00:30:29.726 [2024-07-10 12:32:39.194601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.726 [2024-07-10 12:32:39.194717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.726 [2024-07-10 12:32:39.194742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:29.726 [2024-07-10 12:32:39.194756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:30:29.726 [2024-07-10 12:32:39.194766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.726 [2024-07-10 12:32:39.194844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.726 [2024-07-10 12:32:39.194856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:29.726 [2024-07-10 12:32:39.194873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:30:29.726 [2024-07-10 12:32:39.194883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.726 [2024-07-10 12:32:39.194907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.726 [2024-07-10 12:32:39.194918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:29.726 [2024-07-10 12:32:39.194929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:29.726 [2024-07-10 12:32:39.194939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.726 [2024-07-10 12:32:39.194969] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:29.726 [2024-07-10 12:32:39.194982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.726 [2024-07-10 12:32:39.194993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:29.726 [2024-07-10 12:32:39.195004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:30:29.726 [2024-07-10 12:32:39.195016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.985 [2024-07-10 12:32:39.232750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.985 [2024-07-10 12:32:39.232806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:29.985 [2024-07-10 12:32:39.232824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.771 ms 00:30:29.985 [2024-07-10 12:32:39.232835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.985 [2024-07-10 12:32:39.232919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.985 [2024-07-10 12:32:39.232933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:29.985 [2024-07-10 12:32:39.232954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:30:29.985 [2024-07-10 12:32:39.232965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:30:29.985 [2024-07-10 12:32:39.234144] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 408.322 ms, result 0 00:31:07.968  Copying: 26/1024 [MB] (26 MBps) Copying: 54/1024 [MB] (27 MBps) Copying: 81/1024 [MB] (27 MBps) Copying: 109/1024 [MB] (27 MBps) Copying: 136/1024 [MB] (26 MBps) Copying: 163/1024 [MB] (27 MBps) Copying: 191/1024 [MB] (27 MBps) Copying: 219/1024 [MB] (28 MBps) Copying: 246/1024 [MB] (27 MBps) Copying: 274/1024 [MB] (27 MBps) Copying: 302/1024 [MB] (28 MBps) Copying: 331/1024 [MB] (28 MBps) Copying: 358/1024 [MB] (26 MBps) Copying: 384/1024 [MB] (26 MBps) Copying: 411/1024 [MB] (27 MBps) Copying: 437/1024 [MB] (25 MBps) Copying: 463/1024 [MB] (26 MBps) Copying: 490/1024 [MB] (26 MBps) Copying: 516/1024 [MB] (26 MBps) Copying: 543/1024 [MB] (26 MBps) Copying: 571/1024 [MB] (28 MBps) Copying: 598/1024 [MB] (26 MBps) Copying: 626/1024 [MB] (28 MBps) Copying: 654/1024 [MB] (27 MBps) Copying: 681/1024 [MB] (27 MBps) Copying: 708/1024 [MB] (26 MBps) Copying: 735/1024 [MB] (27 MBps) Copying: 763/1024 [MB] (27 MBps) Copying: 788/1024 [MB] (25 MBps) Copying: 813/1024 [MB] (24 MBps) Copying: 839/1024 [MB] (25 MBps) Copying: 864/1024 [MB] (25 MBps) Copying: 889/1024 [MB] (25 MBps) Copying: 916/1024 [MB] (26 MBps) Copying: 943/1024 [MB] (27 MBps) Copying: 970/1024 [MB] (27 MBps) Copying: 997/1024 [MB] (27 MBps) Copying: 1024/1024 [MB] (average 27 MBps)[2024-07-10 12:33:17.160950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.968 [2024-07-10 12:33:17.161016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:07.968 [2024-07-10 12:33:17.161034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:07.968 [2024-07-10 12:33:17.161046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.968 [2024-07-10 12:33:17.161068] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:07.968 [2024-07-10 12:33:17.164972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.968 [2024-07-10 12:33:17.165013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:07.968 [2024-07-10 12:33:17.165027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.890 ms 00:31:07.968 [2024-07-10 12:33:17.165038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.968 [2024-07-10 12:33:17.166813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.968 [2024-07-10 12:33:17.166851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:07.968 [2024-07-10 12:33:17.166871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.749 ms 00:31:07.968 [2024-07-10 12:33:17.166881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.968 [2024-07-10 12:33:17.184327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.968 [2024-07-10 12:33:17.184368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:31:07.968 [2024-07-10 12:33:17.184383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.456 ms 00:31:07.968 [2024-07-10 12:33:17.184393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.968 [2024-07-10 12:33:17.189406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.968 [2024-07-10 12:33:17.189438] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:31:07.968 [2024-07-10 12:33:17.189457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.988 ms 00:31:07.968 [2024-07-10 12:33:17.189467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.968 [2024-07-10 12:33:17.228655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.968 [2024-07-10 12:33:17.228693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:31:07.968 [2024-07-10 12:33:17.228708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.196 ms 00:31:07.968 [2024-07-10 12:33:17.228718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.968 [2024-07-10 12:33:17.250891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.968 [2024-07-10 12:33:17.250928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:31:07.968 [2024-07-10 12:33:17.250943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.160 ms 00:31:07.968 [2024-07-10 12:33:17.250953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.968 [2024-07-10 12:33:17.251080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.968 [2024-07-10 12:33:17.251093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:31:07.968 [2024-07-10 12:33:17.251104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:31:07.968 [2024-07-10 12:33:17.251114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.968 [2024-07-10 12:33:17.289246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.968 [2024-07-10 12:33:17.289282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:31:07.968 [2024-07-10 12:33:17.289297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.173 ms 00:31:07.968 [2024-07-10 12:33:17.289307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.968 [2024-07-10 12:33:17.327023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.968 [2024-07-10 12:33:17.327059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:31:07.968 [2024-07-10 12:33:17.327072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.740 ms 00:31:07.968 [2024-07-10 12:33:17.327082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.968 [2024-07-10 12:33:17.365191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.968 [2024-07-10 12:33:17.365232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:31:07.968 [2024-07-10 12:33:17.365247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.133 ms 00:31:07.968 [2024-07-10 12:33:17.365271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.968 [2024-07-10 12:33:17.402788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.968 [2024-07-10 12:33:17.402824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:31:07.968 [2024-07-10 12:33:17.402837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.502 ms 00:31:07.968 [2024-07-10 12:33:17.402863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.968 [2024-07-10 12:33:17.402900] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Bands validity: 00:31:07.968 [2024-07-10 12:33:17.402918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:31:07.968 [2024-07-10 12:33:17.402931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:07.968 [2024-07-10 12:33:17.402943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.402955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.402967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.402978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.402989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.403000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.403012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.403023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.403034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.403045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.403056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.403067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.403077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.403088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.403099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.403109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.403120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.403130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.403140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.403151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.403162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.403173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.403184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.403195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.403397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.403408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.403420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.403431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.403442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.403453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.403465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.403476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.403488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.403500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.403511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.403522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.403533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.403544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.403555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.403565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.403576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.403586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.403597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.403608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.403619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.403630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.403640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.403651] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.403661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.403672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.403682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.403693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.403704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.403715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.403726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.403751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.403762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.403773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.403783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.403795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.403806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.403818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.403829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.403858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.403870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.403889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.403910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.403929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.403946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.403962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.403982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.404000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.404018] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.404037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.404056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.404082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.404102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.404121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.404133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.404143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.404155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.404166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.404180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.404198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.404217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.404234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.404253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.404274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.404296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.404315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.404336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.404357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.404378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.404405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.404427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:07.969 [2024-07-10 12:33:17.404447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:07.970 [2024-07-10 12:33:17.404469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:07.970 [2024-07-10 
12:33:17.404492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:07.970 [2024-07-10 12:33:17.404524] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:07.970 [2024-07-10 12:33:17.404539] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1755b5ad-d915-4770-b62e-8f6a41c87fae 00:31:07.970 [2024-07-10 12:33:17.404555] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:31:07.970 [2024-07-10 12:33:17.404570] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:31:07.970 [2024-07-10 12:33:17.404584] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:31:07.970 [2024-07-10 12:33:17.404606] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:31:07.970 [2024-07-10 12:33:17.404620] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:07.970 [2024-07-10 12:33:17.404635] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:07.970 [2024-07-10 12:33:17.404651] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:07.970 [2024-07-10 12:33:17.404666] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:07.970 [2024-07-10 12:33:17.404680] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:07.970 [2024-07-10 12:33:17.404694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.970 [2024-07-10 12:33:17.404711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:07.970 [2024-07-10 12:33:17.404740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.798 ms 00:31:07.970 [2024-07-10 12:33:17.404758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.970 [2024-07-10 12:33:17.425263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.970 [2024-07-10 12:33:17.425304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:07.970 [2024-07-10 12:33:17.425317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.482 ms 00:31:07.970 [2024-07-10 12:33:17.425338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.970 [2024-07-10 12:33:17.425828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.970 [2024-07-10 12:33:17.425841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:07.970 [2024-07-10 12:33:17.425852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.470 ms 00:31:07.970 [2024-07-10 12:33:17.425864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.230 [2024-07-10 12:33:17.469551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:08.230 [2024-07-10 12:33:17.469589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:08.230 [2024-07-10 12:33:17.469603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:08.230 [2024-07-10 12:33:17.469614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.230 [2024-07-10 12:33:17.469667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:08.230 [2024-07-10 12:33:17.469678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:08.230 [2024-07-10 12:33:17.469689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:08.230 
[2024-07-10 12:33:17.469699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.230 [2024-07-10 12:33:17.469776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:08.230 [2024-07-10 12:33:17.469796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:08.230 [2024-07-10 12:33:17.469806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:08.230 [2024-07-10 12:33:17.469816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.230 [2024-07-10 12:33:17.469833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:08.230 [2024-07-10 12:33:17.469860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:08.230 [2024-07-10 12:33:17.469870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:08.230 [2024-07-10 12:33:17.469888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.230 [2024-07-10 12:33:17.588560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:08.230 [2024-07-10 12:33:17.588635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:08.230 [2024-07-10 12:33:17.588653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:08.230 [2024-07-10 12:33:17.588663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.230 [2024-07-10 12:33:17.692065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:08.230 [2024-07-10 12:33:17.692139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:08.230 [2024-07-10 12:33:17.692156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:08.230 [2024-07-10 12:33:17.692167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.230 [2024-07-10 12:33:17.692237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:08.230 [2024-07-10 12:33:17.692249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:08.230 [2024-07-10 12:33:17.692260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:08.230 [2024-07-10 12:33:17.692277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.230 [2024-07-10 12:33:17.692313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:08.230 [2024-07-10 12:33:17.692324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:08.230 [2024-07-10 12:33:17.692334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:08.230 [2024-07-10 12:33:17.692345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.230 [2024-07-10 12:33:17.692460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:08.230 [2024-07-10 12:33:17.692473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:08.230 [2024-07-10 12:33:17.692484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:08.230 [2024-07-10 12:33:17.692498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.230 [2024-07-10 12:33:17.692533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:08.230 [2024-07-10 12:33:17.692544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:08.230 [2024-07-10 12:33:17.692555] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:08.230 [2024-07-10 12:33:17.692565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.230 [2024-07-10 12:33:17.692603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:08.230 [2024-07-10 12:33:17.692614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:08.230 [2024-07-10 12:33:17.692624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:08.230 [2024-07-10 12:33:17.692634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.230 [2024-07-10 12:33:17.692681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:08.230 [2024-07-10 12:33:17.692691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:08.230 [2024-07-10 12:33:17.692702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:08.230 [2024-07-10 12:33:17.692711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.230 [2024-07-10 12:33:17.692852] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 532.728 ms, result 0 00:31:09.642 00:31:09.642 00:31:09.642 12:33:19 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:31:09.901 [2024-07-10 12:33:19.168217] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:31:09.901 [2024-07-10 12:33:19.168345] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82514 ] 00:31:09.901 [2024-07-10 12:33:19.337189] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:10.160 [2024-07-10 12:33:19.571207] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:10.729 [2024-07-10 12:33:19.965811] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:10.729 [2024-07-10 12:33:19.965884] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:10.729 [2024-07-10 12:33:20.126063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.729 [2024-07-10 12:33:20.126115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:10.729 [2024-07-10 12:33:20.126132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:10.729 [2024-07-10 12:33:20.126143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.729 [2024-07-10 12:33:20.126198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.729 [2024-07-10 12:33:20.126211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:10.729 [2024-07-10 12:33:20.126222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:31:10.729 [2024-07-10 12:33:20.126235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.729 [2024-07-10 12:33:20.126258] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:10.729 [2024-07-10 12:33:20.127432] mngt/ftl_mngt_bdev.c: 
236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:10.729 [2024-07-10 12:33:20.127467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.729 [2024-07-10 12:33:20.127482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:10.729 [2024-07-10 12:33:20.127493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.215 ms 00:31:10.729 [2024-07-10 12:33:20.127503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.729 [2024-07-10 12:33:20.129055] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:31:10.729 [2024-07-10 12:33:20.149422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.729 [2024-07-10 12:33:20.149461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:10.729 [2024-07-10 12:33:20.149477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.400 ms 00:31:10.729 [2024-07-10 12:33:20.149488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.729 [2024-07-10 12:33:20.149552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.729 [2024-07-10 12:33:20.149565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:10.729 [2024-07-10 12:33:20.149580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:31:10.729 [2024-07-10 12:33:20.149591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.729 [2024-07-10 12:33:20.156541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.729 [2024-07-10 12:33:20.156573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:10.729 [2024-07-10 12:33:20.156585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.892 ms 00:31:10.729 [2024-07-10 12:33:20.156595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.730 [2024-07-10 12:33:20.156674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.730 [2024-07-10 12:33:20.156690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:10.730 [2024-07-10 12:33:20.156701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:31:10.730 [2024-07-10 12:33:20.156712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.730 [2024-07-10 12:33:20.156761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.730 [2024-07-10 12:33:20.156773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:10.730 [2024-07-10 12:33:20.156784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:31:10.730 [2024-07-10 12:33:20.156793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.730 [2024-07-10 12:33:20.156819] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:10.730 [2024-07-10 12:33:20.162275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.730 [2024-07-10 12:33:20.162304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:10.730 [2024-07-10 12:33:20.162316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.471 ms 00:31:10.730 [2024-07-10 12:33:20.162342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.730 [2024-07-10 
12:33:20.162377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.730 [2024-07-10 12:33:20.162388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:10.730 [2024-07-10 12:33:20.162399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:31:10.730 [2024-07-10 12:33:20.162408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.730 [2024-07-10 12:33:20.162457] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:10.730 [2024-07-10 12:33:20.162482] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:10.730 [2024-07-10 12:33:20.162515] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:10.730 [2024-07-10 12:33:20.162535] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:31:10.730 [2024-07-10 12:33:20.162615] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:10.730 [2024-07-10 12:33:20.162629] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:10.730 [2024-07-10 12:33:20.162641] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:31:10.730 [2024-07-10 12:33:20.162654] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:10.730 [2024-07-10 12:33:20.162666] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:10.730 [2024-07-10 12:33:20.162677] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:10.730 [2024-07-10 12:33:20.162687] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:10.730 [2024-07-10 12:33:20.162697] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:10.730 [2024-07-10 12:33:20.162706] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:10.730 [2024-07-10 12:33:20.162716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.730 [2024-07-10 12:33:20.162729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:10.730 [2024-07-10 12:33:20.162740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:31:10.730 [2024-07-10 12:33:20.162934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.730 [2024-07-10 12:33:20.163032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.730 [2024-07-10 12:33:20.163064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:10.730 [2024-07-10 12:33:20.163095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:31:10.730 [2024-07-10 12:33:20.163125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.730 [2024-07-10 12:33:20.163229] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:10.730 [2024-07-10 12:33:20.163325] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:10.730 [2024-07-10 12:33:20.163427] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:10.730 [2024-07-10 12:33:20.163458] ftl_layout.c: 121:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:31:10.730 [2024-07-10 12:33:20.163488] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:10.730 [2024-07-10 12:33:20.163517] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:10.730 [2024-07-10 12:33:20.163546] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:10.730 [2024-07-10 12:33:20.163575] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:10.730 [2024-07-10 12:33:20.163586] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:10.730 [2024-07-10 12:33:20.163596] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:10.730 [2024-07-10 12:33:20.163605] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:10.730 [2024-07-10 12:33:20.163614] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:10.730 [2024-07-10 12:33:20.163623] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:10.730 [2024-07-10 12:33:20.163633] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:10.730 [2024-07-10 12:33:20.163642] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:10.730 [2024-07-10 12:33:20.163652] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:10.730 [2024-07-10 12:33:20.163663] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:10.730 [2024-07-10 12:33:20.163672] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:10.730 [2024-07-10 12:33:20.163681] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:10.730 [2024-07-10 12:33:20.163690] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:10.730 [2024-07-10 12:33:20.163710] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:10.730 [2024-07-10 12:33:20.163719] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:10.730 [2024-07-10 12:33:20.163738] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:10.730 [2024-07-10 12:33:20.163747] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:10.730 [2024-07-10 12:33:20.163757] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:10.730 [2024-07-10 12:33:20.163766] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:10.730 [2024-07-10 12:33:20.163776] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:10.730 [2024-07-10 12:33:20.163785] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:10.730 [2024-07-10 12:33:20.163795] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:10.730 [2024-07-10 12:33:20.163818] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:10.730 [2024-07-10 12:33:20.163828] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:10.730 [2024-07-10 12:33:20.163838] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:10.730 [2024-07-10 12:33:20.163847] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:10.730 [2024-07-10 12:33:20.163856] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:10.730 [2024-07-10 12:33:20.163865] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:10.730 [2024-07-10 12:33:20.163875] ftl_layout.c: 
119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:10.730 [2024-07-10 12:33:20.163884] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:10.730 [2024-07-10 12:33:20.163893] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:10.730 [2024-07-10 12:33:20.163903] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:10.730 [2024-07-10 12:33:20.163912] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:10.730 [2024-07-10 12:33:20.163921] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:10.730 [2024-07-10 12:33:20.163930] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:10.730 [2024-07-10 12:33:20.163940] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:10.730 [2024-07-10 12:33:20.163950] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:10.730 [2024-07-10 12:33:20.163960] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:10.730 [2024-07-10 12:33:20.163970] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:10.730 [2024-07-10 12:33:20.163980] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:10.730 [2024-07-10 12:33:20.163990] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:10.730 [2024-07-10 12:33:20.164000] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:10.730 [2024-07-10 12:33:20.164010] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:10.730 [2024-07-10 12:33:20.164020] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:10.730 [2024-07-10 12:33:20.164029] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:10.730 [2024-07-10 12:33:20.164038] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:10.730 [2024-07-10 12:33:20.164049] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:10.730 [2024-07-10 12:33:20.164062] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:10.730 [2024-07-10 12:33:20.164082] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:10.730 [2024-07-10 12:33:20.164094] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:10.730 [2024-07-10 12:33:20.164104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:10.730 [2024-07-10 12:33:20.164115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:10.730 [2024-07-10 12:33:20.164127] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:10.730 [2024-07-10 12:33:20.164138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:10.730 [2024-07-10 12:33:20.164149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:10.731 [2024-07-10 
12:33:20.164160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:10.731 [2024-07-10 12:33:20.164171] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:10.731 [2024-07-10 12:33:20.164181] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:10.731 [2024-07-10 12:33:20.164191] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:10.731 [2024-07-10 12:33:20.164201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:10.731 [2024-07-10 12:33:20.164212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:10.731 [2024-07-10 12:33:20.164223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:10.731 [2024-07-10 12:33:20.164233] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:10.731 [2024-07-10 12:33:20.164244] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:10.731 [2024-07-10 12:33:20.164256] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:10.731 [2024-07-10 12:33:20.164266] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:10.731 [2024-07-10 12:33:20.164276] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:10.731 [2024-07-10 12:33:20.164287] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:10.731 [2024-07-10 12:33:20.164298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.731 [2024-07-10 12:33:20.164312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:10.731 [2024-07-10 12:33:20.164322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.122 ms 00:31:10.731 [2024-07-10 12:33:20.164332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.991 [2024-07-10 12:33:20.215567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.991 [2024-07-10 12:33:20.215608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:10.991 [2024-07-10 12:33:20.215622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.269 ms 00:31:10.991 [2024-07-10 12:33:20.215632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.991 [2024-07-10 12:33:20.215709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.991 [2024-07-10 12:33:20.215720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:10.991 [2024-07-10 12:33:20.215744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:31:10.991 [2024-07-10 12:33:20.215755] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.991 [2024-07-10 12:33:20.264972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.991 [2024-07-10 12:33:20.265008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:10.991 [2024-07-10 12:33:20.265023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.244 ms 00:31:10.991 [2024-07-10 12:33:20.265033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.991 [2024-07-10 12:33:20.265067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.991 [2024-07-10 12:33:20.265078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:10.991 [2024-07-10 12:33:20.265090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:10.991 [2024-07-10 12:33:20.265100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.991 [2024-07-10 12:33:20.265570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.991 [2024-07-10 12:33:20.265584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:10.991 [2024-07-10 12:33:20.265595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.419 ms 00:31:10.991 [2024-07-10 12:33:20.265605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.991 [2024-07-10 12:33:20.265717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.991 [2024-07-10 12:33:20.265731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:10.991 [2024-07-10 12:33:20.265757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:31:10.991 [2024-07-10 12:33:20.265768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.991 [2024-07-10 12:33:20.286369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.991 [2024-07-10 12:33:20.286403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:10.991 [2024-07-10 12:33:20.286416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.613 ms 00:31:10.991 [2024-07-10 12:33:20.286441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.991 [2024-07-10 12:33:20.305721] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:31:10.991 [2024-07-10 12:33:20.305776] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:10.991 [2024-07-10 12:33:20.305792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.991 [2024-07-10 12:33:20.305803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:10.991 [2024-07-10 12:33:20.305815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.281 ms 00:31:10.991 [2024-07-10 12:33:20.305824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.991 [2024-07-10 12:33:20.335058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.991 [2024-07-10 12:33:20.335096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:10.991 [2024-07-10 12:33:20.335109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.229 ms 00:31:10.991 [2024-07-10 12:33:20.335126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.991 [2024-07-10 
12:33:20.354254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.991 [2024-07-10 12:33:20.354288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:10.991 [2024-07-10 12:33:20.354301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.117 ms 00:31:10.991 [2024-07-10 12:33:20.354310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.991 [2024-07-10 12:33:20.373159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.991 [2024-07-10 12:33:20.373209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:10.991 [2024-07-10 12:33:20.373224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.841 ms 00:31:10.991 [2024-07-10 12:33:20.373234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.991 [2024-07-10 12:33:20.374057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.991 [2024-07-10 12:33:20.374091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:10.991 [2024-07-10 12:33:20.374104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.715 ms 00:31:10.991 [2024-07-10 12:33:20.374115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.991 [2024-07-10 12:33:20.462955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.991 [2024-07-10 12:33:20.463022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:31:10.991 [2024-07-10 12:33:20.463039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 88.958 ms 00:31:10.991 [2024-07-10 12:33:20.463067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.251 [2024-07-10 12:33:20.474816] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:11.251 [2024-07-10 12:33:20.477548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:11.251 [2024-07-10 12:33:20.477578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:11.251 [2024-07-10 12:33:20.477594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.446 ms 00:31:11.251 [2024-07-10 12:33:20.477605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.251 [2024-07-10 12:33:20.477689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:11.251 [2024-07-10 12:33:20.477702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:11.251 [2024-07-10 12:33:20.477714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:11.251 [2024-07-10 12:33:20.477724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.251 [2024-07-10 12:33:20.477809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:11.251 [2024-07-10 12:33:20.477825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:11.251 [2024-07-10 12:33:20.477836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:31:11.251 [2024-07-10 12:33:20.477846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.251 [2024-07-10 12:33:20.477868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:11.251 [2024-07-10 12:33:20.477879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:11.251 [2024-07-10 12:33:20.477890] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:11.251 [2024-07-10 12:33:20.477901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.251 [2024-07-10 12:33:20.477934] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:11.251 [2024-07-10 12:33:20.477946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:11.251 [2024-07-10 12:33:20.477956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:11.251 [2024-07-10 12:33:20.477970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:31:11.251 [2024-07-10 12:33:20.477980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.251 [2024-07-10 12:33:20.516319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:11.251 [2024-07-10 12:33:20.516478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:11.251 [2024-07-10 12:33:20.516561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.381 ms 00:31:11.251 [2024-07-10 12:33:20.516598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.251 [2024-07-10 12:33:20.516686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:11.251 [2024-07-10 12:33:20.516752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:11.251 [2024-07-10 12:33:20.516787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:31:11.251 [2024-07-10 12:33:20.516869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.251 [2024-07-10 12:33:20.517983] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 392.087 ms, result 0 00:31:48.539  Copying: 27/1024 [MB] (27 MBps) Copying: 54/1024 [MB] (27 MBps) Copying: 81/1024 [MB] (26 MBps) Copying: 108/1024 [MB] (26 MBps) Copying: 136/1024 [MB] (27 MBps) Copying: 164/1024 [MB] (28 MBps) Copying: 193/1024 [MB] (28 MBps) Copying: 222/1024 [MB] (29 MBps) Copying: 250/1024 [MB] (28 MBps) Copying: 279/1024 [MB] (29 MBps) Copying: 307/1024 [MB] (28 MBps) Copying: 336/1024 [MB] (28 MBps) Copying: 364/1024 [MB] (27 MBps) Copying: 392/1024 [MB] (28 MBps) Copying: 419/1024 [MB] (27 MBps) Copying: 448/1024 [MB] (28 MBps) Copying: 475/1024 [MB] (27 MBps) Copying: 502/1024 [MB] (26 MBps) Copying: 529/1024 [MB] (26 MBps) Copying: 556/1024 [MB] (27 MBps) Copying: 584/1024 [MB] (27 MBps) Copying: 611/1024 [MB] (27 MBps) Copying: 639/1024 [MB] (27 MBps) Copying: 666/1024 [MB] (27 MBps) Copying: 694/1024 [MB] (28 MBps) Copying: 723/1024 [MB] (28 MBps) Copying: 751/1024 [MB] (28 MBps) Copying: 779/1024 [MB] (27 MBps) Copying: 806/1024 [MB] (26 MBps) Copying: 833/1024 [MB] (27 MBps) Copying: 859/1024 [MB] (25 MBps) Copying: 886/1024 [MB] (26 MBps) Copying: 912/1024 [MB] (26 MBps) Copying: 939/1024 [MB] (26 MBps) Copying: 965/1024 [MB] (26 MBps) Copying: 993/1024 [MB] (27 MBps) Copying: 1021/1024 [MB] (28 MBps) Copying: 1024/1024 [MB] (average 27 MBps)[2024-07-10 12:33:57.771756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.539 [2024-07-10 12:33:57.771824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:48.539 [2024-07-10 12:33:57.771842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:31:48.539 [2024-07-10 12:33:57.771854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:31:48.539 [2024-07-10 12:33:57.771877] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:48.539 [2024-07-10 12:33:57.776109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.539 [2024-07-10 12:33:57.776151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:48.539 [2024-07-10 12:33:57.776167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.220 ms 00:31:48.539 [2024-07-10 12:33:57.776178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.539 [2024-07-10 12:33:57.776381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.539 [2024-07-10 12:33:57.776393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:48.539 [2024-07-10 12:33:57.776404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.170 ms 00:31:48.539 [2024-07-10 12:33:57.776415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.539 [2024-07-10 12:33:57.779180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.539 [2024-07-10 12:33:57.779204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:31:48.539 [2024-07-10 12:33:57.779215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.753 ms 00:31:48.539 [2024-07-10 12:33:57.779226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.539 [2024-07-10 12:33:57.784530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.539 [2024-07-10 12:33:57.784567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:31:48.539 [2024-07-10 12:33:57.784587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.293 ms 00:31:48.539 [2024-07-10 12:33:57.784597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.539 [2024-07-10 12:33:57.823922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.539 [2024-07-10 12:33:57.823966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:31:48.539 [2024-07-10 12:33:57.823981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.316 ms 00:31:48.539 [2024-07-10 12:33:57.824007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.539 [2024-07-10 12:33:57.846447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.539 [2024-07-10 12:33:57.846485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:31:48.539 [2024-07-10 12:33:57.846501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.435 ms 00:31:48.539 [2024-07-10 12:33:57.846512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.539 [2024-07-10 12:33:57.846645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.539 [2024-07-10 12:33:57.846659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:31:48.539 [2024-07-10 12:33:57.846670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:31:48.539 [2024-07-10 12:33:57.846685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.539 [2024-07-10 12:33:57.885658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.539 [2024-07-10 12:33:57.885696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 
00:31:48.539 [2024-07-10 12:33:57.885710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.018 ms 00:31:48.539 [2024-07-10 12:33:57.885720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.539 [2024-07-10 12:33:57.923563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.539 [2024-07-10 12:33:57.923605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:31:48.539 [2024-07-10 12:33:57.923620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.857 ms 00:31:48.539 [2024-07-10 12:33:57.923630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.539 [2024-07-10 12:33:57.960554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.539 [2024-07-10 12:33:57.960591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:31:48.539 [2024-07-10 12:33:57.960621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.944 ms 00:31:48.539 [2024-07-10 12:33:57.960632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.539 [2024-07-10 12:33:57.997650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.539 [2024-07-10 12:33:57.997688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:31:48.539 [2024-07-10 12:33:57.997702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.000 ms 00:31:48.539 [2024-07-10 12:33:57.997712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.539 [2024-07-10 12:33:57.997760] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:48.539 [2024-07-10 12:33:57.997793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.997807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.997819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.997831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.997843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.997874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.997885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.997896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.997908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.997920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.997931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.997942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.997953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 
00:31:48.539 [2024-07-10 12:33:57.997964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.997975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.997986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.997996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.998007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.998018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.998028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.998039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.998050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.998061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.998071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.998082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.998093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.998104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.998114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.998125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.998136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.998148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.998160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.998170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.998181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.998192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.998203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.998214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.998224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 
wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.998235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.998246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.998256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.998267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.998277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.998288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.998299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.998309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.998321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.998332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.998343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.998353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.998364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.998374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.998385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.998395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.998406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.998417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.998427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.998438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.998448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.998459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.998470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.998480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.998493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.998504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.998515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.998526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.998538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.998548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.998559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.998570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.998581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.998592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.998602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.998613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.998624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.998635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.998647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.998657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.998668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.998679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.998689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:48.539 [2024-07-10 12:33:57.998700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:48.540 [2024-07-10 12:33:57.998710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:48.540 [2024-07-10 12:33:57.998721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:48.540 [2024-07-10 12:33:57.998741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:48.540 [2024-07-10 12:33:57.998752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:48.540 [2024-07-10 12:33:57.998763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:48.540 [2024-07-10 12:33:57.998774] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:48.540 [2024-07-10 12:33:57.998785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:48.540 [2024-07-10 12:33:57.998798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:48.540 [2024-07-10 12:33:57.998809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:48.540 [2024-07-10 12:33:57.998820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:48.540 [2024-07-10 12:33:57.998831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:48.540 [2024-07-10 12:33:57.998842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:48.540 [2024-07-10 12:33:57.998853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:48.540 [2024-07-10 12:33:57.998864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:48.540 [2024-07-10 12:33:57.998875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:48.540 [2024-07-10 12:33:57.998887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:48.540 [2024-07-10 12:33:57.998898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:48.540 [2024-07-10 12:33:57.998909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:48.540 [2024-07-10 12:33:57.998928] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:48.540 [2024-07-10 12:33:57.998938] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1755b5ad-d915-4770-b62e-8f6a41c87fae 00:31:48.540 [2024-07-10 12:33:57.998950] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:31:48.540 [2024-07-10 12:33:57.998960] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:31:48.540 [2024-07-10 12:33:57.998975] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:31:48.540 [2024-07-10 12:33:57.998986] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:31:48.540 [2024-07-10 12:33:57.998996] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:48.540 [2024-07-10 12:33:57.999006] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:48.540 [2024-07-10 12:33:57.999016] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:48.540 [2024-07-10 12:33:57.999025] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:48.540 [2024-07-10 12:33:57.999035] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:48.540 [2024-07-10 12:33:57.999046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.540 [2024-07-10 12:33:57.999056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:48.540 [2024-07-10 12:33:57.999067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.290 ms 00:31:48.540 [2024-07-10 12:33:57.999077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.823 [2024-07-10 12:33:58.018953] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.823 [2024-07-10 12:33:58.018988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:48.823 [2024-07-10 12:33:58.019012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.868 ms 00:31:48.823 [2024-07-10 12:33:58.019022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.823 [2024-07-10 12:33:58.019507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:48.823 [2024-07-10 12:33:58.019518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:48.823 [2024-07-10 12:33:58.019529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.466 ms 00:31:48.823 [2024-07-10 12:33:58.019539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.823 [2024-07-10 12:33:58.064931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:48.823 [2024-07-10 12:33:58.064967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:48.823 [2024-07-10 12:33:58.064981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:48.823 [2024-07-10 12:33:58.064992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.823 [2024-07-10 12:33:58.065045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:48.823 [2024-07-10 12:33:58.065056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:48.823 [2024-07-10 12:33:58.065067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:48.823 [2024-07-10 12:33:58.065077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.823 [2024-07-10 12:33:58.065148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:48.823 [2024-07-10 12:33:58.065161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:48.823 [2024-07-10 12:33:58.065172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:48.823 [2024-07-10 12:33:58.065182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.823 [2024-07-10 12:33:58.065199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:48.823 [2024-07-10 12:33:58.065210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:48.823 [2024-07-10 12:33:58.065220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:48.823 [2024-07-10 12:33:58.065231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.823 [2024-07-10 12:33:58.186666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:48.823 [2024-07-10 12:33:58.186721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:48.823 [2024-07-10 12:33:58.186751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:48.823 [2024-07-10 12:33:58.186763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.823 [2024-07-10 12:33:58.286420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:48.823 [2024-07-10 12:33:58.286487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:48.823 [2024-07-10 12:33:58.286504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:48.823 [2024-07-10 12:33:58.286515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:31:48.823 [2024-07-10 12:33:58.286584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:48.823 [2024-07-10 12:33:58.286597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:48.823 [2024-07-10 12:33:58.286613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:48.823 [2024-07-10 12:33:58.286624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.823 [2024-07-10 12:33:58.286661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:48.824 [2024-07-10 12:33:58.286672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:48.824 [2024-07-10 12:33:58.286683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:48.824 [2024-07-10 12:33:58.286693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.824 [2024-07-10 12:33:58.286862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:48.824 [2024-07-10 12:33:58.286876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:48.824 [2024-07-10 12:33:58.286892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:48.824 [2024-07-10 12:33:58.286902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.824 [2024-07-10 12:33:58.286938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:48.824 [2024-07-10 12:33:58.286950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:48.824 [2024-07-10 12:33:58.286961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:48.824 [2024-07-10 12:33:58.286972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.824 [2024-07-10 12:33:58.287010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:48.824 [2024-07-10 12:33:58.287021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:48.824 [2024-07-10 12:33:58.287031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:48.824 [2024-07-10 12:33:58.287046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.824 [2024-07-10 12:33:58.287090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:48.824 [2024-07-10 12:33:58.287101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:48.824 [2024-07-10 12:33:58.287112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:48.824 [2024-07-10 12:33:58.287122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:48.824 [2024-07-10 12:33:58.287247] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 516.318 ms, result 0 00:31:50.199 00:31:50.199 00:31:50.199 12:33:59 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:31:52.100 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:31:52.100 12:34:01 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:31:52.100 [2024-07-10 12:34:01.326954] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:31:52.100 [2024-07-10 12:34:01.327086] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82940 ] 00:31:52.100 [2024-07-10 12:34:01.498117] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:52.359 [2024-07-10 12:34:01.732174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:52.926 [2024-07-10 12:34:02.124763] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:52.926 [2024-07-10 12:34:02.124839] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:52.926 [2024-07-10 12:34:02.285374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:52.926 [2024-07-10 12:34:02.285439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:52.926 [2024-07-10 12:34:02.285456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:52.926 [2024-07-10 12:34:02.285467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.926 [2024-07-10 12:34:02.285524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:52.926 [2024-07-10 12:34:02.285538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:52.926 [2024-07-10 12:34:02.285549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:31:52.926 [2024-07-10 12:34:02.285562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.926 [2024-07-10 12:34:02.285583] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:52.926 [2024-07-10 12:34:02.286669] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:52.926 [2024-07-10 12:34:02.286699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:52.926 [2024-07-10 12:34:02.286713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:52.926 [2024-07-10 12:34:02.286724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.122 ms 00:31:52.926 [2024-07-10 12:34:02.286742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.926 [2024-07-10 12:34:02.288179] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:31:52.926 [2024-07-10 12:34:02.308585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:52.926 [2024-07-10 12:34:02.308631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:52.926 [2024-07-10 12:34:02.308647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.439 ms 00:31:52.926 [2024-07-10 12:34:02.308657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.926 [2024-07-10 12:34:02.308759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:52.926 [2024-07-10 12:34:02.308774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:52.926 [2024-07-10 12:34:02.308789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:31:52.926 [2024-07-10 12:34:02.308799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.926 [2024-07-10 12:34:02.315623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:31:52.926 [2024-07-10 12:34:02.315656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:52.926 [2024-07-10 12:34:02.315668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.764 ms 00:31:52.926 [2024-07-10 12:34:02.315679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.926 [2024-07-10 12:34:02.315773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:52.926 [2024-07-10 12:34:02.315791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:52.926 [2024-07-10 12:34:02.315803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:31:52.926 [2024-07-10 12:34:02.315814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.926 [2024-07-10 12:34:02.315858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:52.926 [2024-07-10 12:34:02.315870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:52.926 [2024-07-10 12:34:02.315881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:31:52.926 [2024-07-10 12:34:02.315891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.926 [2024-07-10 12:34:02.315917] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:52.926 [2024-07-10 12:34:02.321566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:52.926 [2024-07-10 12:34:02.321600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:52.926 [2024-07-10 12:34:02.321612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.665 ms 00:31:52.926 [2024-07-10 12:34:02.321622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.926 [2024-07-10 12:34:02.321660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:52.926 [2024-07-10 12:34:02.321671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:52.926 [2024-07-10 12:34:02.321682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:31:52.926 [2024-07-10 12:34:02.321692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.926 [2024-07-10 12:34:02.321753] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:52.926 [2024-07-10 12:34:02.321781] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:52.926 [2024-07-10 12:34:02.321817] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:52.926 [2024-07-10 12:34:02.321838] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:31:52.926 [2024-07-10 12:34:02.321936] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:52.926 [2024-07-10 12:34:02.321949] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:52.926 [2024-07-10 12:34:02.321963] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:31:52.926 [2024-07-10 12:34:02.321976] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:52.926 [2024-07-10 12:34:02.321989] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:52.926 [2024-07-10 12:34:02.322001] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:52.927 [2024-07-10 12:34:02.322012] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:52.927 [2024-07-10 12:34:02.322021] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:52.927 [2024-07-10 12:34:02.322031] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:52.927 [2024-07-10 12:34:02.322042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:52.927 [2024-07-10 12:34:02.322055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:52.927 [2024-07-10 12:34:02.322065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:31:52.927 [2024-07-10 12:34:02.322076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.927 [2024-07-10 12:34:02.322147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:52.927 [2024-07-10 12:34:02.322157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:52.927 [2024-07-10 12:34:02.322167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:31:52.927 [2024-07-10 12:34:02.322176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.927 [2024-07-10 12:34:02.322256] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:52.927 [2024-07-10 12:34:02.322268] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:52.927 [2024-07-10 12:34:02.322282] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:52.927 [2024-07-10 12:34:02.322292] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:52.927 [2024-07-10 12:34:02.322302] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:52.927 [2024-07-10 12:34:02.322311] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:52.927 [2024-07-10 12:34:02.322320] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:52.927 [2024-07-10 12:34:02.322330] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:52.927 [2024-07-10 12:34:02.322339] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:52.927 [2024-07-10 12:34:02.322349] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:52.927 [2024-07-10 12:34:02.322359] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:52.927 [2024-07-10 12:34:02.322368] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:52.927 [2024-07-10 12:34:02.322377] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:52.927 [2024-07-10 12:34:02.322387] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:52.927 [2024-07-10 12:34:02.322396] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:52.927 [2024-07-10 12:34:02.322406] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:52.927 [2024-07-10 12:34:02.322415] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:52.927 [2024-07-10 12:34:02.322424] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:52.927 [2024-07-10 12:34:02.322433] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:52.927 [2024-07-10 12:34:02.322443] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:52.927 [2024-07-10 12:34:02.322463] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:52.927 [2024-07-10 12:34:02.322473] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:52.927 [2024-07-10 12:34:02.322483] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:52.927 [2024-07-10 12:34:02.322492] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:52.927 [2024-07-10 12:34:02.322501] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:52.927 [2024-07-10 12:34:02.322510] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:52.927 [2024-07-10 12:34:02.322520] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:52.927 [2024-07-10 12:34:02.322529] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:52.927 [2024-07-10 12:34:02.322538] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:52.927 [2024-07-10 12:34:02.322548] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:52.927 [2024-07-10 12:34:02.322557] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:52.927 [2024-07-10 12:34:02.322566] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:52.927 [2024-07-10 12:34:02.322575] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:52.927 [2024-07-10 12:34:02.322585] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:52.927 [2024-07-10 12:34:02.322594] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:52.927 [2024-07-10 12:34:02.322603] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:52.927 [2024-07-10 12:34:02.322612] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:52.927 [2024-07-10 12:34:02.322621] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:52.927 [2024-07-10 12:34:02.322630] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:52.927 [2024-07-10 12:34:02.322639] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:52.927 [2024-07-10 12:34:02.322648] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:52.927 [2024-07-10 12:34:02.322658] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:52.927 [2024-07-10 12:34:02.322667] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:52.927 [2024-07-10 12:34:02.322676] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:52.927 [2024-07-10 12:34:02.322686] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:52.927 [2024-07-10 12:34:02.322696] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:52.927 [2024-07-10 12:34:02.322705] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:52.927 [2024-07-10 12:34:02.322715] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:52.927 [2024-07-10 12:34:02.322725] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:52.927 [2024-07-10 12:34:02.322745] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:52.927 
[2024-07-10 12:34:02.322754] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:52.927 [2024-07-10 12:34:02.322763] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:52.927 [2024-07-10 12:34:02.322772] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:52.927 [2024-07-10 12:34:02.322782] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:52.927 [2024-07-10 12:34:02.322794] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:52.927 [2024-07-10 12:34:02.322806] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:52.927 [2024-07-10 12:34:02.322816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:52.927 [2024-07-10 12:34:02.322826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:52.927 [2024-07-10 12:34:02.322837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:52.927 [2024-07-10 12:34:02.322848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:52.927 [2024-07-10 12:34:02.322858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:52.927 [2024-07-10 12:34:02.322869] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:52.927 [2024-07-10 12:34:02.322879] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:52.927 [2024-07-10 12:34:02.322890] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:52.927 [2024-07-10 12:34:02.322899] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:52.927 [2024-07-10 12:34:02.322910] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:52.927 [2024-07-10 12:34:02.322920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:52.927 [2024-07-10 12:34:02.322930] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:52.927 [2024-07-10 12:34:02.322941] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:52.927 [2024-07-10 12:34:02.322950] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:52.927 [2024-07-10 12:34:02.322962] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:52.927 [2024-07-10 12:34:02.322973] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:31:52.927 [2024-07-10 12:34:02.322983] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:52.927 [2024-07-10 12:34:02.322994] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:52.927 [2024-07-10 12:34:02.323005] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:52.927 [2024-07-10 12:34:02.323017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:52.927 [2024-07-10 12:34:02.323031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:52.927 [2024-07-10 12:34:02.323041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.812 ms 00:31:52.927 [2024-07-10 12:34:02.323050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.927 [2024-07-10 12:34:02.376141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:52.927 [2024-07-10 12:34:02.376201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:52.927 [2024-07-10 12:34:02.376217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.123 ms 00:31:52.927 [2024-07-10 12:34:02.376228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.927 [2024-07-10 12:34:02.376324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:52.927 [2024-07-10 12:34:02.376336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:52.927 [2024-07-10 12:34:02.376347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:31:52.927 [2024-07-10 12:34:02.376357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.184 [2024-07-10 12:34:02.426496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.184 [2024-07-10 12:34:02.426557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:53.184 [2024-07-10 12:34:02.426574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.147 ms 00:31:53.184 [2024-07-10 12:34:02.426584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.184 [2024-07-10 12:34:02.426643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.184 [2024-07-10 12:34:02.426654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:53.184 [2024-07-10 12:34:02.426665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:53.184 [2024-07-10 12:34:02.426676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.184 [2024-07-10 12:34:02.427190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.184 [2024-07-10 12:34:02.427207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:53.184 [2024-07-10 12:34:02.427218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.420 ms 00:31:53.184 [2024-07-10 12:34:02.427229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.184 [2024-07-10 12:34:02.427350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.184 [2024-07-10 12:34:02.427364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:53.184 [2024-07-10 12:34:02.427374] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:31:53.184 [2024-07-10 12:34:02.427385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.184 [2024-07-10 12:34:02.448043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.184 [2024-07-10 12:34:02.448101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:53.184 [2024-07-10 12:34:02.448118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.669 ms 00:31:53.184 [2024-07-10 12:34:02.448128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.184 [2024-07-10 12:34:02.468504] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:31:53.184 [2024-07-10 12:34:02.468554] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:53.184 [2024-07-10 12:34:02.468571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.184 [2024-07-10 12:34:02.468582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:53.184 [2024-07-10 12:34:02.468595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.334 ms 00:31:53.184 [2024-07-10 12:34:02.468605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.184 [2024-07-10 12:34:02.498379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.184 [2024-07-10 12:34:02.498447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:53.184 [2024-07-10 12:34:02.498464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.773 ms 00:31:53.185 [2024-07-10 12:34:02.498481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.185 [2024-07-10 12:34:02.517738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.185 [2024-07-10 12:34:02.517786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:53.185 [2024-07-10 12:34:02.517801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.234 ms 00:31:53.185 [2024-07-10 12:34:02.517812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.185 [2024-07-10 12:34:02.536741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.185 [2024-07-10 12:34:02.536783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:53.185 [2024-07-10 12:34:02.536797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.915 ms 00:31:53.185 [2024-07-10 12:34:02.536807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.185 [2024-07-10 12:34:02.537667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.185 [2024-07-10 12:34:02.537691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:53.185 [2024-07-10 12:34:02.537704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.735 ms 00:31:53.185 [2024-07-10 12:34:02.537714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.185 [2024-07-10 12:34:02.627552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.185 [2024-07-10 12:34:02.627619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:31:53.185 [2024-07-10 12:34:02.627637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 89.948 ms 00:31:53.185 [2024-07-10 12:34:02.627648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.185 [2024-07-10 12:34:02.640187] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:53.185 [2024-07-10 12:34:02.643419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.185 [2024-07-10 12:34:02.643454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:53.185 [2024-07-10 12:34:02.643470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.729 ms 00:31:53.185 [2024-07-10 12:34:02.643480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.185 [2024-07-10 12:34:02.643587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.185 [2024-07-10 12:34:02.643601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:53.185 [2024-07-10 12:34:02.643613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:53.185 [2024-07-10 12:34:02.643623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.185 [2024-07-10 12:34:02.643695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.185 [2024-07-10 12:34:02.643711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:53.185 [2024-07-10 12:34:02.643722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:31:53.185 [2024-07-10 12:34:02.643744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.185 [2024-07-10 12:34:02.643781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.185 [2024-07-10 12:34:02.643793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:53.185 [2024-07-10 12:34:02.643803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:31:53.185 [2024-07-10 12:34:02.643814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.185 [2024-07-10 12:34:02.643850] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:53.185 [2024-07-10 12:34:02.643863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.185 [2024-07-10 12:34:02.643873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:53.185 [2024-07-10 12:34:02.643886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:31:53.185 [2024-07-10 12:34:02.643895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.442 [2024-07-10 12:34:02.683095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.442 [2024-07-10 12:34:02.683142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:53.443 [2024-07-10 12:34:02.683160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.241 ms 00:31:53.443 [2024-07-10 12:34:02.683170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.443 [2024-07-10 12:34:02.683249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.443 [2024-07-10 12:34:02.683271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:53.443 [2024-07-10 12:34:02.683283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:31:53.443 [2024-07-10 12:34:02.683293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:31:53.443 [2024-07-10 12:34:02.684432] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 399.235 ms, result 0 00:32:31.006  Copying: 27/1024 [MB] (27 MBps) Copying: 54/1024 [MB] (27 MBps) Copying: 81/1024 [MB] (27 MBps) Copying: 108/1024 [MB] (27 MBps) Copying: 135/1024 [MB] (26 MBps) Copying: 162/1024 [MB] (26 MBps) Copying: 188/1024 [MB] (26 MBps) Copying: 216/1024 [MB] (28 MBps) Copying: 244/1024 [MB] (28 MBps) Copying: 272/1024 [MB] (27 MBps) Copying: 299/1024 [MB] (26 MBps) Copying: 327/1024 [MB] (28 MBps) Copying: 354/1024 [MB] (27 MBps) Copying: 382/1024 [MB] (27 MBps) Copying: 409/1024 [MB] (27 MBps) Copying: 436/1024 [MB] (26 MBps) Copying: 463/1024 [MB] (26 MBps) Copying: 490/1024 [MB] (27 MBps) Copying: 518/1024 [MB] (27 MBps) Copying: 546/1024 [MB] (28 MBps) Copying: 574/1024 [MB] (27 MBps) Copying: 603/1024 [MB] (28 MBps) Copying: 631/1024 [MB] (27 MBps) Copying: 659/1024 [MB] (27 MBps) Copying: 686/1024 [MB] (27 MBps) Copying: 715/1024 [MB] (28 MBps) Copying: 743/1024 [MB] (27 MBps) Copying: 771/1024 [MB] (28 MBps) Copying: 800/1024 [MB] (28 MBps) Copying: 828/1024 [MB] (27 MBps) Copying: 855/1024 [MB] (27 MBps) Copying: 883/1024 [MB] (27 MBps) Copying: 910/1024 [MB] (27 MBps) Copying: 941/1024 [MB] (30 MBps) Copying: 969/1024 [MB] (28 MBps) Copying: 997/1024 [MB] (27 MBps) Copying: 1023/1024 [MB] (26 MBps) Copying: 1024/1024 [MB] (average 27 MBps)[2024-07-10 12:34:40.312106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:31.006 [2024-07-10 12:34:40.312178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:31.006 [2024-07-10 12:34:40.312197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:31.006 [2024-07-10 12:34:40.312207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:31.006 [2024-07-10 12:34:40.312984] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:31.006 [2024-07-10 12:34:40.318654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:31.006 [2024-07-10 12:34:40.318696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:31.006 [2024-07-10 12:34:40.318710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.636 ms 00:32:31.006 [2024-07-10 12:34:40.318721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:31.006 [2024-07-10 12:34:40.330581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:31.006 [2024-07-10 12:34:40.330625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:31.006 [2024-07-10 12:34:40.330640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.237 ms 00:32:31.006 [2024-07-10 12:34:40.330650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:31.006 [2024-07-10 12:34:40.353965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:31.006 [2024-07-10 12:34:40.354030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:32:31.006 [2024-07-10 12:34:40.354069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.332 ms 00:32:31.006 [2024-07-10 12:34:40.354080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:31.006 [2024-07-10 12:34:40.359076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:31.006 [2024-07-10 12:34:40.359109] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:32:31.006 [2024-07-10 12:34:40.359121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.967 ms 00:32:31.006 [2024-07-10 12:34:40.359132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:31.006 [2024-07-10 12:34:40.398494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:31.006 [2024-07-10 12:34:40.398533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:32:31.006 [2024-07-10 12:34:40.398548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.379 ms 00:32:31.006 [2024-07-10 12:34:40.398558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:31.006 [2024-07-10 12:34:40.420800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:31.006 [2024-07-10 12:34:40.420838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:32:31.006 [2024-07-10 12:34:40.420853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.239 ms 00:32:31.006 [2024-07-10 12:34:40.420870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:31.267 [2024-07-10 12:34:40.522923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:31.267 [2024-07-10 12:34:40.522992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:32:31.267 [2024-07-10 12:34:40.523008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 102.173 ms 00:32:31.267 [2024-07-10 12:34:40.523019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:31.267 [2024-07-10 12:34:40.562783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:31.267 [2024-07-10 12:34:40.562834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:32:31.267 [2024-07-10 12:34:40.562850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.811 ms 00:32:31.267 [2024-07-10 12:34:40.562860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:31.267 [2024-07-10 12:34:40.601963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:31.267 [2024-07-10 12:34:40.601998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:32:31.267 [2024-07-10 12:34:40.602012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.130 ms 00:32:31.267 [2024-07-10 12:34:40.602022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:31.267 [2024-07-10 12:34:40.640171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:31.267 [2024-07-10 12:34:40.640205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:32:31.267 [2024-07-10 12:34:40.640231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.175 ms 00:32:31.267 [2024-07-10 12:34:40.640241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:31.267 [2024-07-10 12:34:40.678591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:31.267 [2024-07-10 12:34:40.678627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:32:31.267 [2024-07-10 12:34:40.678640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.338 ms 00:32:31.267 [2024-07-10 12:34:40.678649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:31.267 [2024-07-10 12:34:40.678685] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Bands validity: 00:32:31.267 [2024-07-10 12:34:40.678702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 102400 / 261120 wr_cnt: 1 state: open 00:32:31.267 [2024-07-10 12:34:40.678715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:31.267 [2024-07-10 12:34:40.678728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:31.267 [2024-07-10 12:34:40.678753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:31.267 [2024-07-10 12:34:40.678764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:31.267 [2024-07-10 12:34:40.678775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:31.267 [2024-07-10 12:34:40.678786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:31.267 [2024-07-10 12:34:40.678797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:31.267 [2024-07-10 12:34:40.678808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:31.267 [2024-07-10 12:34:40.678818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:31.267 [2024-07-10 12:34:40.678829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:31.267 [2024-07-10 12:34:40.678839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:31.267 [2024-07-10 12:34:40.678850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:31.267 [2024-07-10 12:34:40.678861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:31.267 [2024-07-10 12:34:40.678872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:31.267 [2024-07-10 12:34:40.678882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:31.267 [2024-07-10 12:34:40.678893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:31.267 [2024-07-10 12:34:40.678903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:31.267 [2024-07-10 12:34:40.678913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:31.267 [2024-07-10 12:34:40.678924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:31.267 [2024-07-10 12:34:40.678934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:31.267 [2024-07-10 12:34:40.678944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:31.267 [2024-07-10 12:34:40.678955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:31.267 [2024-07-10 12:34:40.678966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:31.267 [2024-07-10 12:34:40.678977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:31.267 [2024-07-10 12:34:40.678988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:31.267 [2024-07-10 12:34:40.678999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:31.267 [2024-07-10 12:34:40.679009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:31.267 [2024-07-10 12:34:40.679021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:31.267 [2024-07-10 12:34:40.679031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:31.267 [2024-07-10 12:34:40.679043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:31.267 [2024-07-10 12:34:40.679054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:31.267 [2024-07-10 12:34:40.679065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:31.267 [2024-07-10 12:34:40.679075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:31.267 [2024-07-10 12:34:40.679087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:31.267 [2024-07-10 12:34:40.679097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:31.267 [2024-07-10 12:34:40.679108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:31.267 [2024-07-10 12:34:40.679118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:31.267 [2024-07-10 12:34:40.679129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:31.267 [2024-07-10 12:34:40.679139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:31.267 [2024-07-10 12:34:40.679149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:31.267 [2024-07-10 12:34:40.679160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:31.267 [2024-07-10 12:34:40.679170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:31.267 [2024-07-10 12:34:40.679180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:31.267 [2024-07-10 12:34:40.679191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:31.267 [2024-07-10 12:34:40.679201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:31.267 [2024-07-10 12:34:40.679212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:31.267 [2024-07-10 12:34:40.679222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:31.267 [2024-07-10 12:34:40.679233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:31.267 [2024-07-10 12:34:40.679243] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:31.267 [2024-07-10 12:34:40.679253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:31.267 [2024-07-10 12:34:40.679263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:31.267 [2024-07-10 12:34:40.679274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:31.267 [2024-07-10 12:34:40.679284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:31.267 [2024-07-10 12:34:40.679295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:31.267 [2024-07-10 12:34:40.679305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:31.267 [2024-07-10 12:34:40.679315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:31.267 [2024-07-10 12:34:40.679327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:31.267 [2024-07-10 12:34:40.679337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:31.267 [2024-07-10 12:34:40.679347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:31.267 [2024-07-10 12:34:40.679358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:31.267 [2024-07-10 12:34:40.679369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:31.267 [2024-07-10 12:34:40.679380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:31.267 [2024-07-10 12:34:40.679391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:31.267 [2024-07-10 12:34:40.679409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:31.267 [2024-07-10 12:34:40.679419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:31.267 [2024-07-10 12:34:40.679430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:31.267 [2024-07-10 12:34:40.679441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:31.267 [2024-07-10 12:34:40.679451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:31.267 [2024-07-10 12:34:40.679462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:31.267 [2024-07-10 12:34:40.679472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:31.267 [2024-07-10 12:34:40.679482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:31.267 [2024-07-10 12:34:40.679493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:31.267 [2024-07-10 12:34:40.679503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:31.267 [2024-07-10 
12:34:40.679515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:31.268 [2024-07-10 12:34:40.679526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:31.268 [2024-07-10 12:34:40.679537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:31.268 [2024-07-10 12:34:40.679547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:31.268 [2024-07-10 12:34:40.679558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:31.268 [2024-07-10 12:34:40.679569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:31.268 [2024-07-10 12:34:40.679580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:31.268 [2024-07-10 12:34:40.679590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:31.268 [2024-07-10 12:34:40.679601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:31.268 [2024-07-10 12:34:40.679611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:31.268 [2024-07-10 12:34:40.679621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:31.268 [2024-07-10 12:34:40.679631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:31.268 [2024-07-10 12:34:40.679642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:31.268 [2024-07-10 12:34:40.679652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:31.268 [2024-07-10 12:34:40.679662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:31.268 [2024-07-10 12:34:40.679673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:31.268 [2024-07-10 12:34:40.679684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:31.268 [2024-07-10 12:34:40.679694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:31.268 [2024-07-10 12:34:40.679705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:31.268 [2024-07-10 12:34:40.679715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:31.268 [2024-07-10 12:34:40.679726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:31.268 [2024-07-10 12:34:40.679746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:31.268 [2024-07-10 12:34:40.679757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:31.268 [2024-07-10 12:34:40.679767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:31.268 [2024-07-10 12:34:40.679778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 
00:32:31.268 [2024-07-10 12:34:40.679789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:31.268 [2024-07-10 12:34:40.679806] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:31.268 [2024-07-10 12:34:40.679816] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1755b5ad-d915-4770-b62e-8f6a41c87fae 00:32:31.268 [2024-07-10 12:34:40.679827] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 102400 00:32:31.268 [2024-07-10 12:34:40.679837] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 103360 00:32:31.268 [2024-07-10 12:34:40.679846] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 102400 00:32:31.268 [2024-07-10 12:34:40.679856] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0094 00:32:31.268 [2024-07-10 12:34:40.679866] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:31.268 [2024-07-10 12:34:40.679879] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:31.268 [2024-07-10 12:34:40.679889] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:31.268 [2024-07-10 12:34:40.679898] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:31.268 [2024-07-10 12:34:40.679907] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:31.268 [2024-07-10 12:34:40.679917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:31.268 [2024-07-10 12:34:40.679930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:31.268 [2024-07-10 12:34:40.679940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.234 ms 00:32:31.268 [2024-07-10 12:34:40.679950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:31.268 [2024-07-10 12:34:40.700528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:31.268 [2024-07-10 12:34:40.700562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:31.268 [2024-07-10 12:34:40.700587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.579 ms 00:32:31.268 [2024-07-10 12:34:40.700597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:31.268 [2024-07-10 12:34:40.701079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:31.268 [2024-07-10 12:34:40.701096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:31.268 [2024-07-10 12:34:40.701108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.462 ms 00:32:31.268 [2024-07-10 12:34:40.701117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:31.527 [2024-07-10 12:34:40.746663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:31.527 [2024-07-10 12:34:40.746710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:31.527 [2024-07-10 12:34:40.746725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:31.527 [2024-07-10 12:34:40.746749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:31.527 [2024-07-10 12:34:40.746823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:31.527 [2024-07-10 12:34:40.746835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:31.527 [2024-07-10 12:34:40.746846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:32:31.527 [2024-07-10 12:34:40.746856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:31.527 [2024-07-10 12:34:40.746922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:31.527 [2024-07-10 12:34:40.746935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:31.527 [2024-07-10 12:34:40.746945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:31.527 [2024-07-10 12:34:40.746955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:31.527 [2024-07-10 12:34:40.746976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:31.527 [2024-07-10 12:34:40.746987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:31.527 [2024-07-10 12:34:40.746998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:31.527 [2024-07-10 12:34:40.747007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:31.527 [2024-07-10 12:34:40.871067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:31.527 [2024-07-10 12:34:40.871144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:31.527 [2024-07-10 12:34:40.871161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:31.527 [2024-07-10 12:34:40.871172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:31.527 [2024-07-10 12:34:40.975665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:31.527 [2024-07-10 12:34:40.975745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:31.527 [2024-07-10 12:34:40.975762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:31.527 [2024-07-10 12:34:40.975773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:31.527 [2024-07-10 12:34:40.975843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:31.527 [2024-07-10 12:34:40.975855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:31.527 [2024-07-10 12:34:40.975866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:31.527 [2024-07-10 12:34:40.975876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:31.527 [2024-07-10 12:34:40.975913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:31.527 [2024-07-10 12:34:40.975931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:31.527 [2024-07-10 12:34:40.975942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:31.527 [2024-07-10 12:34:40.975952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:31.527 [2024-07-10 12:34:40.976251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:31.527 [2024-07-10 12:34:40.976268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:31.527 [2024-07-10 12:34:40.976280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:31.527 [2024-07-10 12:34:40.976290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:31.527 [2024-07-10 12:34:40.976329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:31.527 [2024-07-10 12:34:40.976341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:31.527 
[2024-07-10 12:34:40.976356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:31.527 [2024-07-10 12:34:40.976366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:31.527 [2024-07-10 12:34:40.976404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:31.527 [2024-07-10 12:34:40.976415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:31.527 [2024-07-10 12:34:40.976425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:31.527 [2024-07-10 12:34:40.976435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:31.527 [2024-07-10 12:34:40.976480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:31.527 [2024-07-10 12:34:40.976494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:31.527 [2024-07-10 12:34:40.976505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:31.527 [2024-07-10 12:34:40.976514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:31.527 [2024-07-10 12:34:40.976631] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 667.833 ms, result 0 00:32:33.433 00:32:33.433 00:32:33.433 12:34:42 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:32:33.433 [2024-07-10 12:34:42.761047] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:32:33.433 [2024-07-10 12:34:42.761179] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83364 ] 00:32:33.692 [2024-07-10 12:34:42.930618] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:33.951 [2024-07-10 12:34:43.174988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:34.209 [2024-07-10 12:34:43.589327] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:34.209 [2024-07-10 12:34:43.589400] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:34.468 [2024-07-10 12:34:43.750419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.468 [2024-07-10 12:34:43.750480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:34.468 [2024-07-10 12:34:43.750499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:34.468 [2024-07-10 12:34:43.750509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.468 [2024-07-10 12:34:43.750569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.468 [2024-07-10 12:34:43.750582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:34.468 [2024-07-10 12:34:43.750594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:32:34.468 [2024-07-10 12:34:43.750607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.468 [2024-07-10 12:34:43.750628] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:34.468 [2024-07-10 
12:34:43.751674] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:34.468 [2024-07-10 12:34:43.751704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.468 [2024-07-10 12:34:43.751719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:34.468 [2024-07-10 12:34:43.751743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.081 ms 00:32:34.468 [2024-07-10 12:34:43.751754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.468 [2024-07-10 12:34:43.753214] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:32:34.468 [2024-07-10 12:34:43.771948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.468 [2024-07-10 12:34:43.771993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:34.468 [2024-07-10 12:34:43.772009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.764 ms 00:32:34.468 [2024-07-10 12:34:43.772020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.468 [2024-07-10 12:34:43.772095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.468 [2024-07-10 12:34:43.772108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:34.468 [2024-07-10 12:34:43.772123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:32:34.469 [2024-07-10 12:34:43.772134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.469 [2024-07-10 12:34:43.778898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.469 [2024-07-10 12:34:43.778928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:34.469 [2024-07-10 12:34:43.778941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.703 ms 00:32:34.469 [2024-07-10 12:34:43.778951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.469 [2024-07-10 12:34:43.779035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.469 [2024-07-10 12:34:43.779052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:34.469 [2024-07-10 12:34:43.779063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:32:34.469 [2024-07-10 12:34:43.779073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.469 [2024-07-10 12:34:43.779118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.469 [2024-07-10 12:34:43.779131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:34.469 [2024-07-10 12:34:43.779141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:32:34.469 [2024-07-10 12:34:43.779152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.469 [2024-07-10 12:34:43.779178] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:34.469 [2024-07-10 12:34:43.784547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.469 [2024-07-10 12:34:43.784578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:34.469 [2024-07-10 12:34:43.784591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.384 ms 00:32:34.469 [2024-07-10 12:34:43.784601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:32:34.469 [2024-07-10 12:34:43.784637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.469 [2024-07-10 12:34:43.784648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:34.469 [2024-07-10 12:34:43.784659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:32:34.469 [2024-07-10 12:34:43.784669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.469 [2024-07-10 12:34:43.784720] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:34.469 [2024-07-10 12:34:43.784757] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:34.469 [2024-07-10 12:34:43.784802] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:34.469 [2024-07-10 12:34:43.784823] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:32:34.469 [2024-07-10 12:34:43.784908] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:34.469 [2024-07-10 12:34:43.784921] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:34.469 [2024-07-10 12:34:43.784935] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:32:34.469 [2024-07-10 12:34:43.784949] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:34.469 [2024-07-10 12:34:43.784962] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:34.469 [2024-07-10 12:34:43.784974] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:34.469 [2024-07-10 12:34:43.784984] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:34.469 [2024-07-10 12:34:43.784994] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:34.469 [2024-07-10 12:34:43.785004] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:34.469 [2024-07-10 12:34:43.785015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.469 [2024-07-10 12:34:43.785029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:34.469 [2024-07-10 12:34:43.785040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:32:34.469 [2024-07-10 12:34:43.785049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.469 [2024-07-10 12:34:43.785117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.469 [2024-07-10 12:34:43.785128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:34.469 [2024-07-10 12:34:43.785139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:32:34.469 [2024-07-10 12:34:43.785149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.469 [2024-07-10 12:34:43.785232] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:34.469 [2024-07-10 12:34:43.785245] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:34.469 [2024-07-10 12:34:43.785259] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:34.469 [2024-07-10 12:34:43.785270] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:34.469 [2024-07-10 12:34:43.785281] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:34.469 [2024-07-10 12:34:43.785291] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:34.469 [2024-07-10 12:34:43.785300] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:34.469 [2024-07-10 12:34:43.785311] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:34.469 [2024-07-10 12:34:43.785321] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:34.469 [2024-07-10 12:34:43.785330] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:34.469 [2024-07-10 12:34:43.785343] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:34.469 [2024-07-10 12:34:43.785352] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:34.469 [2024-07-10 12:34:43.785362] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:34.469 [2024-07-10 12:34:43.785371] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:34.469 [2024-07-10 12:34:43.785381] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:32:34.469 [2024-07-10 12:34:43.785391] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:34.469 [2024-07-10 12:34:43.785400] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:34.469 [2024-07-10 12:34:43.785410] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:34.469 [2024-07-10 12:34:43.785420] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:34.469 [2024-07-10 12:34:43.785429] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:34.469 [2024-07-10 12:34:43.785451] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:34.469 [2024-07-10 12:34:43.785461] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:34.469 [2024-07-10 12:34:43.785470] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:34.469 [2024-07-10 12:34:43.785480] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:34.469 [2024-07-10 12:34:43.785490] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:34.469 [2024-07-10 12:34:43.785499] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:34.469 [2024-07-10 12:34:43.785508] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:34.469 [2024-07-10 12:34:43.785517] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:34.469 [2024-07-10 12:34:43.785527] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:34.469 [2024-07-10 12:34:43.785536] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:34.469 [2024-07-10 12:34:43.785545] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:34.469 [2024-07-10 12:34:43.785554] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:34.469 [2024-07-10 12:34:43.785564] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:34.469 [2024-07-10 12:34:43.785573] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:34.469 [2024-07-10 12:34:43.785584] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:34.469 [2024-07-10 
12:34:43.785594] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:34.469 [2024-07-10 12:34:43.785603] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:34.469 [2024-07-10 12:34:43.785613] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:34.469 [2024-07-10 12:34:43.785623] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:34.469 [2024-07-10 12:34:43.785632] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:34.469 [2024-07-10 12:34:43.785642] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:34.469 [2024-07-10 12:34:43.785651] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:34.469 [2024-07-10 12:34:43.785662] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:34.469 [2024-07-10 12:34:43.785671] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:34.469 [2024-07-10 12:34:43.785682] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:34.469 [2024-07-10 12:34:43.785691] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:34.469 [2024-07-10 12:34:43.785701] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:34.469 [2024-07-10 12:34:43.785711] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:34.469 [2024-07-10 12:34:43.785721] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:34.469 [2024-07-10 12:34:43.785741] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:34.469 [2024-07-10 12:34:43.785752] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:34.469 [2024-07-10 12:34:43.785761] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:34.469 [2024-07-10 12:34:43.785771] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:34.469 [2024-07-10 12:34:43.785782] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:34.469 [2024-07-10 12:34:43.785795] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:34.469 [2024-07-10 12:34:43.785806] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:34.469 [2024-07-10 12:34:43.785817] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:34.469 [2024-07-10 12:34:43.785828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:34.469 [2024-07-10 12:34:43.785838] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:34.469 [2024-07-10 12:34:43.785848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:34.470 [2024-07-10 12:34:43.785859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:34.470 [2024-07-10 12:34:43.785869] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 
00:32:34.470 [2024-07-10 12:34:43.785879] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:34.470 [2024-07-10 12:34:43.785888] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:34.470 [2024-07-10 12:34:43.785898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:34.470 [2024-07-10 12:34:43.785908] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:34.470 [2024-07-10 12:34:43.785918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:34.470 [2024-07-10 12:34:43.785929] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:32:34.470 [2024-07-10 12:34:43.785939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:32:34.470 [2024-07-10 12:34:43.785949] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:34.470 [2024-07-10 12:34:43.785960] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:34.470 [2024-07-10 12:34:43.785971] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:34.470 [2024-07-10 12:34:43.785981] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:34.470 [2024-07-10 12:34:43.785992] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:34.470 [2024-07-10 12:34:43.786003] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:34.470 [2024-07-10 12:34:43.786015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.470 [2024-07-10 12:34:43.786028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:34.470 [2024-07-10 12:34:43.786039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.834 ms 00:32:34.470 [2024-07-10 12:34:43.786048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.470 [2024-07-10 12:34:43.839456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.470 [2024-07-10 12:34:43.839522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:34.470 [2024-07-10 12:34:43.839540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.444 ms 00:32:34.470 [2024-07-10 12:34:43.839551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.470 [2024-07-10 12:34:43.839655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.470 [2024-07-10 12:34:43.839667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:34.470 [2024-07-10 12:34:43.839679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:32:34.470 [2024-07-10 12:34:43.839690] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.470 [2024-07-10 12:34:43.891951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.470 [2024-07-10 12:34:43.892008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:34.470 [2024-07-10 12:34:43.892024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.250 ms 00:32:34.470 [2024-07-10 12:34:43.892035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.470 [2024-07-10 12:34:43.892103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.470 [2024-07-10 12:34:43.892116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:34.470 [2024-07-10 12:34:43.892127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:34.470 [2024-07-10 12:34:43.892137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.470 [2024-07-10 12:34:43.892619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.470 [2024-07-10 12:34:43.892634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:34.470 [2024-07-10 12:34:43.892646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.416 ms 00:32:34.470 [2024-07-10 12:34:43.892656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.470 [2024-07-10 12:34:43.892795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.470 [2024-07-10 12:34:43.892810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:34.470 [2024-07-10 12:34:43.892821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:32:34.470 [2024-07-10 12:34:43.892831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.470 [2024-07-10 12:34:43.914011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.470 [2024-07-10 12:34:43.914057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:34.470 [2024-07-10 12:34:43.914072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.190 ms 00:32:34.470 [2024-07-10 12:34:43.914083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.470 [2024-07-10 12:34:43.934792] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:32:34.470 [2024-07-10 12:34:43.934838] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:34.470 [2024-07-10 12:34:43.934854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.470 [2024-07-10 12:34:43.934865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:34.470 [2024-07-10 12:34:43.934877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.672 ms 00:32:34.470 [2024-07-10 12:34:43.934887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.754 [2024-07-10 12:34:43.965448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.754 [2024-07-10 12:34:43.965499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:34.754 [2024-07-10 12:34:43.965515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.564 ms 00:32:34.754 [2024-07-10 12:34:43.965543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.754 
[2024-07-10 12:34:43.985357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.754 [2024-07-10 12:34:43.985399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:34.754 [2024-07-10 12:34:43.985414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.794 ms 00:32:34.754 [2024-07-10 12:34:43.985424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.754 [2024-07-10 12:34:44.004859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.754 [2024-07-10 12:34:44.004896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:34.754 [2024-07-10 12:34:44.004909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.425 ms 00:32:34.754 [2024-07-10 12:34:44.004919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.754 [2024-07-10 12:34:44.005774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.754 [2024-07-10 12:34:44.005797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:34.754 [2024-07-10 12:34:44.005810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.759 ms 00:32:34.754 [2024-07-10 12:34:44.005820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.754 [2024-07-10 12:34:44.092097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.754 [2024-07-10 12:34:44.092197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:32:34.754 [2024-07-10 12:34:44.092216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.386 ms 00:32:34.754 [2024-07-10 12:34:44.092227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.754 [2024-07-10 12:34:44.104189] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:34.754 [2024-07-10 12:34:44.107235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.754 [2024-07-10 12:34:44.107264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:34.754 [2024-07-10 12:34:44.107280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.962 ms 00:32:34.754 [2024-07-10 12:34:44.107290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.754 [2024-07-10 12:34:44.107391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.754 [2024-07-10 12:34:44.107405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:34.754 [2024-07-10 12:34:44.107417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:32:34.754 [2024-07-10 12:34:44.107427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.754 [2024-07-10 12:34:44.108917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.754 [2024-07-10 12:34:44.108957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:34.754 [2024-07-10 12:34:44.108970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.451 ms 00:32:34.754 [2024-07-10 12:34:44.108979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.754 [2024-07-10 12:34:44.109011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.754 [2024-07-10 12:34:44.109023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:34.754 [2024-07-10 
12:34:44.109034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:32:34.754 [2024-07-10 12:34:44.109043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.754 [2024-07-10 12:34:44.109083] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:34.754 [2024-07-10 12:34:44.109096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.754 [2024-07-10 12:34:44.109106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:34.754 [2024-07-10 12:34:44.109119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:32:34.754 [2024-07-10 12:34:44.109129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.754 [2024-07-10 12:34:44.146707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.754 [2024-07-10 12:34:44.146759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:34.754 [2024-07-10 12:34:44.146774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.618 ms 00:32:34.754 [2024-07-10 12:34:44.146785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.754 [2024-07-10 12:34:44.146858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.754 [2024-07-10 12:34:44.146881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:34.754 [2024-07-10 12:34:44.146892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:32:34.754 [2024-07-10 12:34:44.146902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.754 [2024-07-10 12:34:44.152317] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 401.693 ms, result 0 00:33:09.123  Copying: 26/1024 [MB] (26 MBps) Copying: 54/1024 [MB] (27 MBps) Copying: 82/1024 [MB] (28 MBps) Copying: 111/1024 [MB] (29 MBps) Copying: 139/1024 [MB] (28 MBps) Copying: 169/1024 [MB] (30 MBps) Copying: 199/1024 [MB] (29 MBps) Copying: 228/1024 [MB] (29 MBps) Copying: 258/1024 [MB] (29 MBps) Copying: 289/1024 [MB] (30 MBps) Copying: 319/1024 [MB] (30 MBps) Copying: 350/1024 [MB] (30 MBps) Copying: 381/1024 [MB] (31 MBps) Copying: 410/1024 [MB] (29 MBps) Copying: 439/1024 [MB] (28 MBps) Copying: 471/1024 [MB] (31 MBps) Copying: 503/1024 [MB] (31 MBps) Copying: 534/1024 [MB] (31 MBps) Copying: 565/1024 [MB] (31 MBps) Copying: 596/1024 [MB] (30 MBps) Copying: 626/1024 [MB] (30 MBps) Copying: 657/1024 [MB] (30 MBps) Copying: 687/1024 [MB] (30 MBps) Copying: 720/1024 [MB] (32 MBps) Copying: 750/1024 [MB] (30 MBps) Copying: 781/1024 [MB] (30 MBps) Copying: 814/1024 [MB] (32 MBps) Copying: 845/1024 [MB] (31 MBps) Copying: 875/1024 [MB] (29 MBps) Copying: 905/1024 [MB] (30 MBps) Copying: 935/1024 [MB] (29 MBps) Copying: 965/1024 [MB] (30 MBps) Copying: 997/1024 [MB] (32 MBps) Copying: 1024/1024 [MB] (average 30 MBps)[2024-07-10 12:35:18.537150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:09.123 [2024-07-10 12:35:18.537236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:09.123 [2024-07-10 12:35:18.537255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:09.123 [2024-07-10 12:35:18.537284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:09.123 [2024-07-10 12:35:18.537315] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: 
[FTL][ftl0] FTL IO channel destroy on app_thread 00:33:09.123 [2024-07-10 12:35:18.541973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:09.123 [2024-07-10 12:35:18.542020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:09.123 [2024-07-10 12:35:18.542035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.642 ms 00:33:09.123 [2024-07-10 12:35:18.542046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:09.123 [2024-07-10 12:35:18.542269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:09.123 [2024-07-10 12:35:18.542285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:09.123 [2024-07-10 12:35:18.542297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.184 ms 00:33:09.123 [2024-07-10 12:35:18.542308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:09.123 [2024-07-10 12:35:18.547655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:09.123 [2024-07-10 12:35:18.547716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:33:09.123 [2024-07-10 12:35:18.547772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.335 ms 00:33:09.123 [2024-07-10 12:35:18.547783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:09.123 [2024-07-10 12:35:18.553462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:09.123 [2024-07-10 12:35:18.553505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:33:09.123 [2024-07-10 12:35:18.553519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.631 ms 00:33:09.123 [2024-07-10 12:35:18.553530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:09.123 [2024-07-10 12:35:18.594189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:09.123 [2024-07-10 12:35:18.594245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:33:09.123 [2024-07-10 12:35:18.594262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.663 ms 00:33:09.123 [2024-07-10 12:35:18.594273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:09.382 [2024-07-10 12:35:18.616118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:09.382 [2024-07-10 12:35:18.616169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:33:09.382 [2024-07-10 12:35:18.616188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.830 ms 00:33:09.382 [2024-07-10 12:35:18.616206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:09.382 [2024-07-10 12:35:18.753998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:09.382 [2024-07-10 12:35:18.754076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:33:09.382 [2024-07-10 12:35:18.754097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 137.959 ms 00:33:09.382 [2024-07-10 12:35:18.754108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:09.383 [2024-07-10 12:35:18.791622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:09.383 [2024-07-10 12:35:18.791682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:33:09.383 [2024-07-10 12:35:18.791700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.551 ms 
00:33:09.383 [2024-07-10 12:35:18.791711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:09.383 [2024-07-10 12:35:18.832220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:09.383 [2024-07-10 12:35:18.832274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:33:09.383 [2024-07-10 12:35:18.832292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.518 ms 00:33:09.383 [2024-07-10 12:35:18.832302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:09.643 [2024-07-10 12:35:18.870832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:09.643 [2024-07-10 12:35:18.870889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:33:09.643 [2024-07-10 12:35:18.870907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.543 ms 00:33:09.643 [2024-07-10 12:35:18.870933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:09.643 [2024-07-10 12:35:18.910409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:09.643 [2024-07-10 12:35:18.910472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:33:09.643 [2024-07-10 12:35:18.910492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.442 ms 00:33:09.643 [2024-07-10 12:35:18.910503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:09.643 [2024-07-10 12:35:18.910554] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:09.643 [2024-07-10 12:35:18.910575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133888 / 261120 wr_cnt: 1 state: open 00:33:09.643 [2024-07-10 12:35:18.910589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.910601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.910613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.910625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.910636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.910650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.910666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.910677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.910689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.910701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.910712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.910723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.910748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 
wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.910760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.910771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.910782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.910794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.910805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.910817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.910828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.910840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.910851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.910863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.910873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.910885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.910899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.910910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.910922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.910933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.910944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.910960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.910971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.910984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.911001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.911013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.911024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.911035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.911046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.911062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.911074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.911085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.911096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.911107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.911118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.911129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.911140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.911151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.911162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.911172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.911186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.911199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.911216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.911233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.911252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.911268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.911284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.911301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.911320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.911337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.911351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.911364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.911378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.911392] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.911407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.911421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.911435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.911449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.911464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.911486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.911501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.911515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.911528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.911543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.911556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.911570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.911584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.911598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.911612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.911626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.911640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.911653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.911666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.911681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.911698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:09.643 [2024-07-10 12:35:18.911712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:09.644 [2024-07-10 12:35:18.911727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:09.644 [2024-07-10 12:35:18.911752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:09.644 [2024-07-10 12:35:18.911767] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:09.644 [2024-07-10 12:35:18.911781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:09.644 [2024-07-10 12:35:18.911795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:09.644 [2024-07-10 12:35:18.911809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:09.644 [2024-07-10 12:35:18.911825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:09.644 [2024-07-10 12:35:18.911842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:09.644 [2024-07-10 12:35:18.911857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:09.644 [2024-07-10 12:35:18.911871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:09.644 [2024-07-10 12:35:18.911885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:09.644 [2024-07-10 12:35:18.911900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:09.644 [2024-07-10 12:35:18.911914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:09.644 [2024-07-10 12:35:18.911928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:09.644 [2024-07-10 12:35:18.911952] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:09.644 [2024-07-10 12:35:18.911967] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1755b5ad-d915-4770-b62e-8f6a41c87fae 00:33:09.644 [2024-07-10 12:35:18.911982] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133888 00:33:09.644 [2024-07-10 12:35:18.911995] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32448 00:33:09.644 [2024-07-10 12:35:18.912007] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 31488 00:33:09.644 [2024-07-10 12:35:18.912021] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0305 00:33:09.644 [2024-07-10 12:35:18.912034] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:09.644 [2024-07-10 12:35:18.912053] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:09.644 [2024-07-10 12:35:18.912068] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:09.644 [2024-07-10 12:35:18.912083] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:09.644 [2024-07-10 12:35:18.912107] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:09.644 [2024-07-10 12:35:18.912121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:09.644 [2024-07-10 12:35:18.912151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:09.644 [2024-07-10 12:35:18.912172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.571 ms 00:33:09.644 [2024-07-10 12:35:18.912185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:09.644 [2024-07-10 12:35:18.932409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:09.644 [2024-07-10 12:35:18.932465] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:09.644 [2024-07-10 12:35:18.932481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.203 ms 00:33:09.644 [2024-07-10 12:35:18.932508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:09.644 [2024-07-10 12:35:18.933043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:09.644 [2024-07-10 12:35:18.933058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:09.644 [2024-07-10 12:35:18.933069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.502 ms 00:33:09.644 [2024-07-10 12:35:18.933079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:09.644 [2024-07-10 12:35:18.977555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:09.644 [2024-07-10 12:35:18.977623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:09.644 [2024-07-10 12:35:18.977638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:09.644 [2024-07-10 12:35:18.977649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:09.644 [2024-07-10 12:35:18.977726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:09.644 [2024-07-10 12:35:18.977754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:09.644 [2024-07-10 12:35:18.977766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:09.644 [2024-07-10 12:35:18.977776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:09.644 [2024-07-10 12:35:18.977855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:09.644 [2024-07-10 12:35:18.977869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:09.644 [2024-07-10 12:35:18.977880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:09.644 [2024-07-10 12:35:18.977890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:09.644 [2024-07-10 12:35:18.977913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:09.644 [2024-07-10 12:35:18.977924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:09.644 [2024-07-10 12:35:18.977935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:09.644 [2024-07-10 12:35:18.977946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:09.644 [2024-07-10 12:35:19.095668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:09.644 [2024-07-10 12:35:19.095759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:09.644 [2024-07-10 12:35:19.095778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:09.644 [2024-07-10 12:35:19.095790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:09.903 [2024-07-10 12:35:19.201966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:09.904 [2024-07-10 12:35:19.202044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:09.904 [2024-07-10 12:35:19.202061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:09.904 [2024-07-10 12:35:19.202072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:09.904 [2024-07-10 12:35:19.202145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:33:09.904 [2024-07-10 12:35:19.202157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:09.904 [2024-07-10 12:35:19.202168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:09.904 [2024-07-10 12:35:19.202179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:09.904 [2024-07-10 12:35:19.202218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:09.904 [2024-07-10 12:35:19.202229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:09.904 [2024-07-10 12:35:19.202246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:09.904 [2024-07-10 12:35:19.202256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:09.904 [2024-07-10 12:35:19.202381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:09.904 [2024-07-10 12:35:19.202394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:09.904 [2024-07-10 12:35:19.202405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:09.904 [2024-07-10 12:35:19.202415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:09.904 [2024-07-10 12:35:19.202456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:09.904 [2024-07-10 12:35:19.202467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:09.904 [2024-07-10 12:35:19.202482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:09.904 [2024-07-10 12:35:19.202493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:09.904 [2024-07-10 12:35:19.202534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:09.904 [2024-07-10 12:35:19.202545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:09.904 [2024-07-10 12:35:19.202556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:09.904 [2024-07-10 12:35:19.202566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:09.904 [2024-07-10 12:35:19.202611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:09.904 [2024-07-10 12:35:19.202623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:09.904 [2024-07-10 12:35:19.202637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:09.904 [2024-07-10 12:35:19.202646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:09.904 [2024-07-10 12:35:19.202789] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 666.813 ms, result 0 00:33:11.281 00:33:11.281 00:33:11.281 12:35:20 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:33:13.187 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:33:13.187 12:35:22 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:33:13.187 12:35:22 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:33:13.187 12:35:22 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:33:13.187 12:35:22 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:33:13.187 12:35:22 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 
00:33:13.187 12:35:22 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 81863 00:33:13.187 12:35:22 ftl.ftl_restore -- common/autotest_common.sh@948 -- # '[' -z 81863 ']' 00:33:13.187 12:35:22 ftl.ftl_restore -- common/autotest_common.sh@952 -- # kill -0 81863 00:33:13.187 Process with pid 81863 is not found 00:33:13.187 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (81863) - No such process 00:33:13.187 12:35:22 ftl.ftl_restore -- common/autotest_common.sh@975 -- # echo 'Process with pid 81863 is not found' 00:33:13.187 12:35:22 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:33:13.187 12:35:22 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:33:13.187 Remove shared memory files 00:33:13.187 12:35:22 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:33:13.187 12:35:22 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:33:13.187 12:35:22 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:33:13.187 12:35:22 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:33:13.187 12:35:22 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:33:13.187 ************************************ 00:33:13.187 END TEST ftl_restore 00:33:13.187 ************************************ 00:33:13.187 00:33:13.187 real 3m4.567s 00:33:13.187 user 2m52.710s 00:33:13.187 sys 0m13.218s 00:33:13.187 12:35:22 ftl.ftl_restore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:13.187 12:35:22 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:33:13.187 12:35:22 ftl -- common/autotest_common.sh@1142 -- # return 0 00:33:13.187 12:35:22 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:33:13.187 12:35:22 ftl -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:33:13.187 12:35:22 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:13.187 12:35:22 ftl -- common/autotest_common.sh@10 -- # set +x 00:33:13.187 ************************************ 00:33:13.187 START TEST ftl_dirty_shutdown 00:33:13.187 ************************************ 00:33:13.187 12:35:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:33:13.187 * Looking for test storage... 00:33:13.187 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:33:13.188 12:35:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:33:13.188 12:35:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:33:13.188 12:35:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:33:13.188 12:35:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:33:13.188 12:35:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:33:13.188 12:35:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:33:13.188 12:35:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:13.188 12:35:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:33:13.188 12:35:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:33:13.188 12:35:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:13.188 12:35:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:13.188 12:35:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:33:13.188 12:35:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:33:13.188 12:35:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:33:13.188 12:35:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:33:13.188 12:35:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:33:13.188 12:35:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:33:13.188 12:35:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:13.188 12:35:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:13.188 12:35:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:33:13.188 12:35:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:33:13.188 12:35:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:33:13.188 12:35:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:33:13.188 12:35:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:33:13.188 12:35:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:33:13.188 12:35:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:33:13.188 12:35:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:33:13.188 12:35:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:13.188 12:35:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:13.188 12:35:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:13.188 12:35:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:13.188 12:35:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:33:13.188 12:35:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:33:13.188 12:35:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:33:13.188 12:35:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:33:13.188 12:35:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:33:13.188 12:35:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # 
device=0000:00:11.0 00:33:13.188 12:35:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:33:13.447 12:35:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:33:13.447 12:35:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:33:13.447 12:35:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:33:13.447 12:35:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:33:13.447 12:35:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=83822 00:33:13.447 12:35:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 83822 00:33:13.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:13.447 12:35:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@829 -- # '[' -z 83822 ']' 00:33:13.447 12:35:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:13.447 12:35:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:13.447 12:35:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:13.447 12:35:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:33:13.447 12:35:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:13.447 12:35:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:33:13.447 [2024-07-10 12:35:22.773874] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:33:13.447 [2024-07-10 12:35:22.774022] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83822 ] 00:33:13.714 [2024-07-10 12:35:22.947562] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:13.715 [2024-07-10 12:35:23.192256] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:14.670 12:35:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:14.670 12:35:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@862 -- # return 0 00:33:14.670 12:35:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:33:14.670 12:35:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:33:14.670 12:35:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:33:14.670 12:35:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:33:14.670 12:35:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:33:14.670 12:35:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:33:15.238 12:35:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:33:15.238 12:35:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:33:15.238 12:35:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:33:15.238 12:35:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:33:15.238 12:35:24 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@1379 -- # local bdev_info 00:33:15.238 12:35:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:33:15.238 12:35:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:33:15.238 12:35:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:33:15.238 12:35:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:33:15.238 { 00:33:15.238 "name": "nvme0n1", 00:33:15.238 "aliases": [ 00:33:15.238 "e091792f-27b9-4bbe-9271-40f73dfcbe99" 00:33:15.238 ], 00:33:15.238 "product_name": "NVMe disk", 00:33:15.238 "block_size": 4096, 00:33:15.238 "num_blocks": 1310720, 00:33:15.238 "uuid": "e091792f-27b9-4bbe-9271-40f73dfcbe99", 00:33:15.238 "assigned_rate_limits": { 00:33:15.238 "rw_ios_per_sec": 0, 00:33:15.238 "rw_mbytes_per_sec": 0, 00:33:15.238 "r_mbytes_per_sec": 0, 00:33:15.238 "w_mbytes_per_sec": 0 00:33:15.238 }, 00:33:15.238 "claimed": true, 00:33:15.238 "claim_type": "read_many_write_one", 00:33:15.238 "zoned": false, 00:33:15.238 "supported_io_types": { 00:33:15.238 "read": true, 00:33:15.238 "write": true, 00:33:15.238 "unmap": true, 00:33:15.238 "flush": true, 00:33:15.238 "reset": true, 00:33:15.238 "nvme_admin": true, 00:33:15.238 "nvme_io": true, 00:33:15.238 "nvme_io_md": false, 00:33:15.238 "write_zeroes": true, 00:33:15.238 "zcopy": false, 00:33:15.238 "get_zone_info": false, 00:33:15.238 "zone_management": false, 00:33:15.238 "zone_append": false, 00:33:15.238 "compare": true, 00:33:15.238 "compare_and_write": false, 00:33:15.238 "abort": true, 00:33:15.238 "seek_hole": false, 00:33:15.238 "seek_data": false, 00:33:15.238 "copy": true, 00:33:15.238 "nvme_iov_md": false 00:33:15.238 }, 00:33:15.238 "driver_specific": { 00:33:15.238 "nvme": [ 00:33:15.238 { 00:33:15.238 "pci_address": "0000:00:11.0", 00:33:15.238 "trid": { 00:33:15.238 "trtype": "PCIe", 00:33:15.238 "traddr": "0000:00:11.0" 00:33:15.238 }, 00:33:15.238 "ctrlr_data": { 00:33:15.238 "cntlid": 0, 00:33:15.238 "vendor_id": "0x1b36", 00:33:15.238 "model_number": "QEMU NVMe Ctrl", 00:33:15.238 "serial_number": "12341", 00:33:15.238 "firmware_revision": "8.0.0", 00:33:15.238 "subnqn": "nqn.2019-08.org.qemu:12341", 00:33:15.238 "oacs": { 00:33:15.238 "security": 0, 00:33:15.238 "format": 1, 00:33:15.238 "firmware": 0, 00:33:15.238 "ns_manage": 1 00:33:15.238 }, 00:33:15.238 "multi_ctrlr": false, 00:33:15.238 "ana_reporting": false 00:33:15.238 }, 00:33:15.238 "vs": { 00:33:15.238 "nvme_version": "1.4" 00:33:15.238 }, 00:33:15.238 "ns_data": { 00:33:15.238 "id": 1, 00:33:15.238 "can_share": false 00:33:15.238 } 00:33:15.238 } 00:33:15.238 ], 00:33:15.238 "mp_policy": "active_passive" 00:33:15.238 } 00:33:15.238 } 00:33:15.238 ]' 00:33:15.238 12:35:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:33:15.238 12:35:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:33:15.238 12:35:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:33:15.238 12:35:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:33:15.238 12:35:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:33:15.238 12:35:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:33:15.238 12:35:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:33:15.238 12:35:24 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:33:15.238 12:35:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:33:15.238 12:35:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:15.238 12:35:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:33:15.497 12:35:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=db2d9183-054a-4dab-8d5a-78e0f2ba64dc 00:33:15.497 12:35:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:33:15.497 12:35:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u db2d9183-054a-4dab-8d5a-78e0f2ba64dc 00:33:15.756 12:35:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:33:16.015 12:35:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=82721e7a-8dd4-414f-848a-2b8510d4dc58 00:33:16.015 12:35:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 82721e7a-8dd4-414f-848a-2b8510d4dc58 00:33:16.275 12:35:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=4c9828d8-d65b-4b28-aec4-65d62021bb80 00:33:16.275 12:35:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:33:16.275 12:35:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 4c9828d8-d65b-4b28-aec4-65d62021bb80 00:33:16.275 12:35:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:33:16.275 12:35:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:33:16.275 12:35:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=4c9828d8-d65b-4b28-aec4-65d62021bb80 00:33:16.275 12:35:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:33:16.275 12:35:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 4c9828d8-d65b-4b28-aec4-65d62021bb80 00:33:16.275 12:35:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=4c9828d8-d65b-4b28-aec4-65d62021bb80 00:33:16.275 12:35:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:33:16.275 12:35:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:33:16.275 12:35:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:33:16.275 12:35:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4c9828d8-d65b-4b28-aec4-65d62021bb80 00:33:16.275 12:35:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:33:16.275 { 00:33:16.275 "name": "4c9828d8-d65b-4b28-aec4-65d62021bb80", 00:33:16.275 "aliases": [ 00:33:16.275 "lvs/nvme0n1p0" 00:33:16.275 ], 00:33:16.275 "product_name": "Logical Volume", 00:33:16.275 "block_size": 4096, 00:33:16.275 "num_blocks": 26476544, 00:33:16.275 "uuid": "4c9828d8-d65b-4b28-aec4-65d62021bb80", 00:33:16.275 "assigned_rate_limits": { 00:33:16.275 "rw_ios_per_sec": 0, 00:33:16.275 "rw_mbytes_per_sec": 0, 00:33:16.275 "r_mbytes_per_sec": 0, 00:33:16.275 "w_mbytes_per_sec": 0 00:33:16.275 }, 00:33:16.275 "claimed": false, 00:33:16.275 "zoned": false, 00:33:16.275 "supported_io_types": { 00:33:16.275 "read": true, 00:33:16.275 "write": true, 00:33:16.275 "unmap": true, 00:33:16.275 "flush": false, 00:33:16.275 "reset": true, 
00:33:16.275 "nvme_admin": false, 00:33:16.275 "nvme_io": false, 00:33:16.275 "nvme_io_md": false, 00:33:16.275 "write_zeroes": true, 00:33:16.275 "zcopy": false, 00:33:16.275 "get_zone_info": false, 00:33:16.275 "zone_management": false, 00:33:16.275 "zone_append": false, 00:33:16.275 "compare": false, 00:33:16.275 "compare_and_write": false, 00:33:16.275 "abort": false, 00:33:16.275 "seek_hole": true, 00:33:16.275 "seek_data": true, 00:33:16.275 "copy": false, 00:33:16.275 "nvme_iov_md": false 00:33:16.275 }, 00:33:16.275 "driver_specific": { 00:33:16.275 "lvol": { 00:33:16.275 "lvol_store_uuid": "82721e7a-8dd4-414f-848a-2b8510d4dc58", 00:33:16.275 "base_bdev": "nvme0n1", 00:33:16.275 "thin_provision": true, 00:33:16.275 "num_allocated_clusters": 0, 00:33:16.275 "snapshot": false, 00:33:16.275 "clone": false, 00:33:16.275 "esnap_clone": false 00:33:16.275 } 00:33:16.275 } 00:33:16.275 } 00:33:16.275 ]' 00:33:16.275 12:35:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:33:16.534 12:35:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:33:16.534 12:35:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:33:16.534 12:35:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:33:16.534 12:35:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:33:16.534 12:35:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:33:16.534 12:35:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:33:16.534 12:35:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:33:16.534 12:35:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:33:16.793 12:35:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:33:16.793 12:35:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:33:16.793 12:35:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 4c9828d8-d65b-4b28-aec4-65d62021bb80 00:33:16.793 12:35:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=4c9828d8-d65b-4b28-aec4-65d62021bb80 00:33:16.793 12:35:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:33:16.793 12:35:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:33:16.793 12:35:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:33:16.793 12:35:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4c9828d8-d65b-4b28-aec4-65d62021bb80 00:33:17.052 12:35:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:33:17.052 { 00:33:17.052 "name": "4c9828d8-d65b-4b28-aec4-65d62021bb80", 00:33:17.052 "aliases": [ 00:33:17.052 "lvs/nvme0n1p0" 00:33:17.052 ], 00:33:17.052 "product_name": "Logical Volume", 00:33:17.052 "block_size": 4096, 00:33:17.052 "num_blocks": 26476544, 00:33:17.052 "uuid": "4c9828d8-d65b-4b28-aec4-65d62021bb80", 00:33:17.052 "assigned_rate_limits": { 00:33:17.052 "rw_ios_per_sec": 0, 00:33:17.052 "rw_mbytes_per_sec": 0, 00:33:17.052 "r_mbytes_per_sec": 0, 00:33:17.052 "w_mbytes_per_sec": 0 00:33:17.052 }, 00:33:17.052 "claimed": false, 00:33:17.052 "zoned": false, 00:33:17.052 "supported_io_types": { 00:33:17.052 "read": true, 00:33:17.052 "write": true, 00:33:17.052 "unmap": 
true, 00:33:17.052 "flush": false, 00:33:17.052 "reset": true, 00:33:17.052 "nvme_admin": false, 00:33:17.052 "nvme_io": false, 00:33:17.052 "nvme_io_md": false, 00:33:17.052 "write_zeroes": true, 00:33:17.052 "zcopy": false, 00:33:17.052 "get_zone_info": false, 00:33:17.052 "zone_management": false, 00:33:17.052 "zone_append": false, 00:33:17.052 "compare": false, 00:33:17.052 "compare_and_write": false, 00:33:17.052 "abort": false, 00:33:17.052 "seek_hole": true, 00:33:17.052 "seek_data": true, 00:33:17.052 "copy": false, 00:33:17.052 "nvme_iov_md": false 00:33:17.052 }, 00:33:17.052 "driver_specific": { 00:33:17.052 "lvol": { 00:33:17.052 "lvol_store_uuid": "82721e7a-8dd4-414f-848a-2b8510d4dc58", 00:33:17.052 "base_bdev": "nvme0n1", 00:33:17.052 "thin_provision": true, 00:33:17.052 "num_allocated_clusters": 0, 00:33:17.052 "snapshot": false, 00:33:17.052 "clone": false, 00:33:17.052 "esnap_clone": false 00:33:17.052 } 00:33:17.052 } 00:33:17.052 } 00:33:17.052 ]' 00:33:17.052 12:35:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:33:17.052 12:35:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:33:17.052 12:35:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:33:17.052 12:35:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:33:17.052 12:35:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:33:17.052 12:35:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:33:17.052 12:35:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:33:17.052 12:35:26 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:33:17.052 12:35:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:33:17.311 12:35:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 4c9828d8-d65b-4b28-aec4-65d62021bb80 00:33:17.311 12:35:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=4c9828d8-d65b-4b28-aec4-65d62021bb80 00:33:17.311 12:35:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:33:17.311 12:35:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:33:17.311 12:35:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:33:17.311 12:35:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4c9828d8-d65b-4b28-aec4-65d62021bb80 00:33:17.311 12:35:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:33:17.311 { 00:33:17.311 "name": "4c9828d8-d65b-4b28-aec4-65d62021bb80", 00:33:17.311 "aliases": [ 00:33:17.311 "lvs/nvme0n1p0" 00:33:17.311 ], 00:33:17.311 "product_name": "Logical Volume", 00:33:17.311 "block_size": 4096, 00:33:17.311 "num_blocks": 26476544, 00:33:17.311 "uuid": "4c9828d8-d65b-4b28-aec4-65d62021bb80", 00:33:17.311 "assigned_rate_limits": { 00:33:17.311 "rw_ios_per_sec": 0, 00:33:17.311 "rw_mbytes_per_sec": 0, 00:33:17.311 "r_mbytes_per_sec": 0, 00:33:17.311 "w_mbytes_per_sec": 0 00:33:17.311 }, 00:33:17.311 "claimed": false, 00:33:17.311 "zoned": false, 00:33:17.311 "supported_io_types": { 00:33:17.311 "read": true, 00:33:17.311 "write": true, 00:33:17.312 "unmap": true, 00:33:17.312 "flush": false, 00:33:17.312 "reset": true, 00:33:17.312 "nvme_admin": false, 00:33:17.312 
"nvme_io": false, 00:33:17.312 "nvme_io_md": false, 00:33:17.312 "write_zeroes": true, 00:33:17.312 "zcopy": false, 00:33:17.312 "get_zone_info": false, 00:33:17.312 "zone_management": false, 00:33:17.312 "zone_append": false, 00:33:17.312 "compare": false, 00:33:17.312 "compare_and_write": false, 00:33:17.312 "abort": false, 00:33:17.312 "seek_hole": true, 00:33:17.312 "seek_data": true, 00:33:17.312 "copy": false, 00:33:17.312 "nvme_iov_md": false 00:33:17.312 }, 00:33:17.312 "driver_specific": { 00:33:17.312 "lvol": { 00:33:17.312 "lvol_store_uuid": "82721e7a-8dd4-414f-848a-2b8510d4dc58", 00:33:17.312 "base_bdev": "nvme0n1", 00:33:17.312 "thin_provision": true, 00:33:17.312 "num_allocated_clusters": 0, 00:33:17.312 "snapshot": false, 00:33:17.312 "clone": false, 00:33:17.312 "esnap_clone": false 00:33:17.312 } 00:33:17.312 } 00:33:17.312 } 00:33:17.312 ]' 00:33:17.312 12:35:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:33:17.312 12:35:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:33:17.312 12:35:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:33:17.572 12:35:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:33:17.572 12:35:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:33:17.572 12:35:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:33:17.572 12:35:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:33:17.572 12:35:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 4c9828d8-d65b-4b28-aec4-65d62021bb80 --l2p_dram_limit 10' 00:33:17.572 12:35:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:33:17.572 12:35:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:33:17.572 12:35:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:33:17.572 12:35:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 4c9828d8-d65b-4b28-aec4-65d62021bb80 --l2p_dram_limit 10 -c nvc0n1p0 00:33:17.572 [2024-07-10 12:35:27.006491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:17.572 [2024-07-10 12:35:27.006568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:33:17.572 [2024-07-10 12:35:27.006585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:17.572 [2024-07-10 12:35:27.006600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:17.572 [2024-07-10 12:35:27.006673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:17.572 [2024-07-10 12:35:27.006689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:17.572 [2024-07-10 12:35:27.006700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:33:17.572 [2024-07-10 12:35:27.006713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:17.572 [2024-07-10 12:35:27.006763] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:33:17.572 [2024-07-10 12:35:27.007886] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:33:17.572 [2024-07-10 12:35:27.007915] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:33:17.572 [2024-07-10 12:35:27.007933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:17.572 [2024-07-10 12:35:27.007946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.160 ms 00:33:17.572 [2024-07-10 12:35:27.007959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:17.572 [2024-07-10 12:35:27.008036] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID af10987c-0e85-4376-ab8d-dcd11810374e 00:33:17.572 [2024-07-10 12:35:27.010440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:17.572 [2024-07-10 12:35:27.010476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:33:17.572 [2024-07-10 12:35:27.010491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:33:17.572 [2024-07-10 12:35:27.010502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:17.572 [2024-07-10 12:35:27.023190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:17.572 [2024-07-10 12:35:27.023229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:17.572 [2024-07-10 12:35:27.023250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.640 ms 00:33:17.572 [2024-07-10 12:35:27.023260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:17.572 [2024-07-10 12:35:27.023379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:17.572 [2024-07-10 12:35:27.023394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:17.572 [2024-07-10 12:35:27.023408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:33:17.572 [2024-07-10 12:35:27.023419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:17.572 [2024-07-10 12:35:27.023498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:17.572 [2024-07-10 12:35:27.023511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:33:17.572 [2024-07-10 12:35:27.023524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:33:17.572 [2024-07-10 12:35:27.023537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:17.572 [2024-07-10 12:35:27.023566] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:33:17.572 [2024-07-10 12:35:27.029501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:17.572 [2024-07-10 12:35:27.029540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:17.573 [2024-07-10 12:35:27.029554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.954 ms 00:33:17.573 [2024-07-10 12:35:27.029569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:17.573 [2024-07-10 12:35:27.029611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:17.573 [2024-07-10 12:35:27.029624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:33:17.573 [2024-07-10 12:35:27.029636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:33:17.573 [2024-07-10 12:35:27.029649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:17.573 [2024-07-10 12:35:27.029684] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:33:17.573 [2024-07-10 
12:35:27.029842] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:33:17.573 [2024-07-10 12:35:27.029867] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:33:17.573 [2024-07-10 12:35:27.029888] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:33:17.573 [2024-07-10 12:35:27.029901] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:33:17.573 [2024-07-10 12:35:27.029916] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:33:17.573 [2024-07-10 12:35:27.029929] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:33:17.573 [2024-07-10 12:35:27.029942] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:33:17.573 [2024-07-10 12:35:27.029956] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:33:17.573 [2024-07-10 12:35:27.029968] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:33:17.573 [2024-07-10 12:35:27.029979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:17.573 [2024-07-10 12:35:27.029991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:33:17.573 [2024-07-10 12:35:27.030002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:33:17.573 [2024-07-10 12:35:27.030014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:17.573 [2024-07-10 12:35:27.030087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:17.573 [2024-07-10 12:35:27.030099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:33:17.573 [2024-07-10 12:35:27.030110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:33:17.573 [2024-07-10 12:35:27.030121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:17.573 [2024-07-10 12:35:27.030209] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:33:17.573 [2024-07-10 12:35:27.030227] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:33:17.573 [2024-07-10 12:35:27.030247] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:17.573 [2024-07-10 12:35:27.030261] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:17.573 [2024-07-10 12:35:27.030271] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:33:17.573 [2024-07-10 12:35:27.030283] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:33:17.573 [2024-07-10 12:35:27.030292] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:33:17.573 [2024-07-10 12:35:27.030304] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:33:17.573 [2024-07-10 12:35:27.030313] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:33:17.573 [2024-07-10 12:35:27.030324] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:17.573 [2024-07-10 12:35:27.030334] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:33:17.573 [2024-07-10 12:35:27.030347] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:33:17.573 [2024-07-10 12:35:27.030355] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 
00:33:17.573 [2024-07-10 12:35:27.030369] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:33:17.573 [2024-07-10 12:35:27.030378] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:33:17.573 [2024-07-10 12:35:27.030390] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:17.573 [2024-07-10 12:35:27.030399] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:33:17.573 [2024-07-10 12:35:27.030413] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:33:17.573 [2024-07-10 12:35:27.030423] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:17.573 [2024-07-10 12:35:27.030436] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:33:17.573 [2024-07-10 12:35:27.030445] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:33:17.573 [2024-07-10 12:35:27.030456] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:17.573 [2024-07-10 12:35:27.030465] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:33:17.573 [2024-07-10 12:35:27.030477] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:33:17.573 [2024-07-10 12:35:27.030486] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:17.573 [2024-07-10 12:35:27.030498] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:33:17.573 [2024-07-10 12:35:27.030507] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:33:17.573 [2024-07-10 12:35:27.030518] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:17.573 [2024-07-10 12:35:27.030528] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:33:17.573 [2024-07-10 12:35:27.030539] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:33:17.573 [2024-07-10 12:35:27.030548] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:17.573 [2024-07-10 12:35:27.030560] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:33:17.573 [2024-07-10 12:35:27.030569] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:33:17.573 [2024-07-10 12:35:27.030584] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:17.573 [2024-07-10 12:35:27.030593] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:33:17.573 [2024-07-10 12:35:27.030604] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:33:17.573 [2024-07-10 12:35:27.030613] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:17.573 [2024-07-10 12:35:27.030627] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:33:17.573 [2024-07-10 12:35:27.030636] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:33:17.573 [2024-07-10 12:35:27.030647] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:17.573 [2024-07-10 12:35:27.030656] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:33:17.573 [2024-07-10 12:35:27.030668] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:33:17.573 [2024-07-10 12:35:27.030677] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:17.573 [2024-07-10 12:35:27.030688] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:33:17.573 [2024-07-10 12:35:27.030698] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:33:17.573 [2024-07-10 12:35:27.030712] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:17.573 [2024-07-10 12:35:27.030722] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:17.573 [2024-07-10 12:35:27.030744] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:33:17.573 [2024-07-10 12:35:27.030754] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:33:17.573 [2024-07-10 12:35:27.030769] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:33:17.573 [2024-07-10 12:35:27.030778] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:33:17.573 [2024-07-10 12:35:27.030790] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:33:17.573 [2024-07-10 12:35:27.030800] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:33:17.573 [2024-07-10 12:35:27.030816] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:33:17.573 [2024-07-10 12:35:27.030829] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:17.573 [2024-07-10 12:35:27.030846] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:33:17.573 [2024-07-10 12:35:27.030857] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:33:17.573 [2024-07-10 12:35:27.031062] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:33:17.573 [2024-07-10 12:35:27.031072] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:33:17.573 [2024-07-10 12:35:27.031086] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:33:17.573 [2024-07-10 12:35:27.031096] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:33:17.573 [2024-07-10 12:35:27.031111] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:33:17.573 [2024-07-10 12:35:27.031121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:33:17.573 [2024-07-10 12:35:27.031134] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:33:17.573 [2024-07-10 12:35:27.031144] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:33:17.573 [2024-07-10 12:35:27.031160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:33:17.573 [2024-07-10 12:35:27.031170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:33:17.573 [2024-07-10 12:35:27.031183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:33:17.573 [2024-07-10 
12:35:27.031193] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:33:17.573 [2024-07-10 12:35:27.031206] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:33:17.573 [2024-07-10 12:35:27.031218] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:17.573 [2024-07-10 12:35:27.031232] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:33:17.573 [2024-07-10 12:35:27.031242] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:33:17.573 [2024-07-10 12:35:27.031257] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:33:17.574 [2024-07-10 12:35:27.031268] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:33:17.574 [2024-07-10 12:35:27.031282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:17.574 [2024-07-10 12:35:27.031293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:33:17.574 [2024-07-10 12:35:27.031308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.122 ms 00:33:17.574 [2024-07-10 12:35:27.031318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:17.574 [2024-07-10 12:35:27.031368] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:33:17.574 [2024-07-10 12:35:27.031381] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:33:20.863 [2024-07-10 12:35:30.104923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:20.863 [2024-07-10 12:35:30.105000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:33:20.863 [2024-07-10 12:35:30.105022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3078.533 ms 00:33:20.863 [2024-07-10 12:35:30.105034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.863 [2024-07-10 12:35:30.155415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:20.863 [2024-07-10 12:35:30.155485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:20.863 [2024-07-10 12:35:30.155507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.173 ms 00:33:20.863 [2024-07-10 12:35:30.155518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.863 [2024-07-10 12:35:30.155750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:20.863 [2024-07-10 12:35:30.155771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:33:20.863 [2024-07-10 12:35:30.155787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:33:20.863 [2024-07-10 12:35:30.155802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.863 [2024-07-10 12:35:30.208561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:20.863 [2024-07-10 12:35:30.208624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:20.863 [2024-07-10 12:35:30.208643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.784 ms 00:33:20.863 [2024-07-10 12:35:30.208654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.863 [2024-07-10 12:35:30.208716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:20.863 [2024-07-10 12:35:30.208748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:20.863 [2024-07-10 12:35:30.208763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:33:20.863 [2024-07-10 12:35:30.208773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.863 [2024-07-10 12:35:30.209268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:20.863 [2024-07-10 12:35:30.209289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:20.863 [2024-07-10 12:35:30.209303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.417 ms 00:33:20.863 [2024-07-10 12:35:30.209314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.863 [2024-07-10 12:35:30.209431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:20.863 [2024-07-10 12:35:30.209444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:20.863 [2024-07-10 12:35:30.209461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:33:20.863 [2024-07-10 12:35:30.209471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.863 [2024-07-10 12:35:30.229977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:20.863 [2024-07-10 12:35:30.230027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:20.863 [2024-07-10 
12:35:30.230046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.511 ms 00:33:20.863 [2024-07-10 12:35:30.230058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:20.863 [2024-07-10 12:35:30.242924] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:33:20.863 [2024-07-10 12:35:30.246176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:20.863 [2024-07-10 12:35:30.246211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:33:20.864 [2024-07-10 12:35:30.246225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.032 ms 00:33:20.864 [2024-07-10 12:35:30.246254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.123 [2024-07-10 12:35:30.345569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.123 [2024-07-10 12:35:30.345650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:33:21.123 [2024-07-10 12:35:30.345670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 99.431 ms 00:33:21.123 [2024-07-10 12:35:30.345684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.123 [2024-07-10 12:35:30.345911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.123 [2024-07-10 12:35:30.345936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:33:21.123 [2024-07-10 12:35:30.345948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.162 ms 00:33:21.123 [2024-07-10 12:35:30.345965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.123 [2024-07-10 12:35:30.384257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.123 [2024-07-10 12:35:30.384307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:33:21.123 [2024-07-10 12:35:30.384323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.299 ms 00:33:21.123 [2024-07-10 12:35:30.384336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.123 [2024-07-10 12:35:30.422110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.123 [2024-07-10 12:35:30.422156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:33:21.123 [2024-07-10 12:35:30.422173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.786 ms 00:33:21.123 [2024-07-10 12:35:30.422185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.123 [2024-07-10 12:35:30.422983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.123 [2024-07-10 12:35:30.423010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:33:21.123 [2024-07-10 12:35:30.423023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.758 ms 00:33:21.123 [2024-07-10 12:35:30.423041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.123 [2024-07-10 12:35:30.533921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.123 [2024-07-10 12:35:30.533996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:33:21.123 [2024-07-10 12:35:30.534015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 111.004 ms 00:33:21.123 [2024-07-10 12:35:30.534034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.123 [2024-07-10 
12:35:30.575779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.123 [2024-07-10 12:35:30.575864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:33:21.123 [2024-07-10 12:35:30.575884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.761 ms 00:33:21.123 [2024-07-10 12:35:30.575912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.384 [2024-07-10 12:35:30.617154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.384 [2024-07-10 12:35:30.617239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:33:21.384 [2024-07-10 12:35:30.617257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.229 ms 00:33:21.384 [2024-07-10 12:35:30.617271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.384 [2024-07-10 12:35:30.658878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.384 [2024-07-10 12:35:30.658945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:33:21.384 [2024-07-10 12:35:30.658962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.617 ms 00:33:21.384 [2024-07-10 12:35:30.658975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.384 [2024-07-10 12:35:30.659232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.384 [2024-07-10 12:35:30.659250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:33:21.384 [2024-07-10 12:35:30.659262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.211 ms 00:33:21.384 [2024-07-10 12:35:30.659280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.384 [2024-07-10 12:35:30.659388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.384 [2024-07-10 12:35:30.659404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:33:21.384 [2024-07-10 12:35:30.659419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:33:21.384 [2024-07-10 12:35:30.659432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.384 [2024-07-10 12:35:30.660786] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3659.718 ms, result 0 00:33:21.384 { 00:33:21.384 "name": "ftl0", 00:33:21.384 "uuid": "af10987c-0e85-4376-ab8d-dcd11810374e" 00:33:21.384 } 00:33:21.384 12:35:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:33:21.384 12:35:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:33:21.643 12:35:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:33:21.643 12:35:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:33:21.643 12:35:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:33:21.643 /dev/nbd0 00:33:21.643 12:35:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:33:21.643 12:35:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:33:21.643 12:35:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@867 -- # local i 00:33:21.643 12:35:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:33:21.643 12:35:31 ftl.ftl_dirty_shutdown 
-- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:33:21.643 12:35:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:33:21.643 12:35:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # break 00:33:21.643 12:35:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:33:21.643 12:35:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:33:21.643 12:35:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:33:21.643 1+0 records in 00:33:21.643 1+0 records out 00:33:21.643 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00105177 s, 3.9 MB/s 00:33:21.643 12:35:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:33:21.643 12:35:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # size=4096 00:33:21.643 12:35:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:33:21.901 12:35:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:33:21.901 12:35:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # return 0 00:33:21.901 12:35:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:33:21.901 [2024-07-10 12:35:31.214052] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:33:21.901 [2024-07-10 12:35:31.214180] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83963 ] 00:33:22.159 [2024-07-10 12:35:31.385898] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:22.417 [2024-07-10 12:35:31.647556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:29.345  Copying: 206/1024 [MB] (206 MBps) Copying: 412/1024 [MB] (206 MBps) Copying: 619/1024 [MB] (206 MBps) Copying: 823/1024 [MB] (204 MBps) Copying: 1024/1024 [MB] (average 205 MBps) 00:33:29.345 00:33:29.345 12:35:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:33:31.245 12:35:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:33:31.245 [2024-07-10 12:35:40.280171] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:33:31.245 [2024-07-10 12:35:40.280309] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84057 ] 00:33:31.245 [2024-07-10 12:35:40.452825] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:31.245 [2024-07-10 12:35:40.721766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:30.386  Copying: 17/1024 [MB] (17 MBps) Copying: 35/1024 [MB] (17 MBps) Copying: 53/1024 [MB] (18 MBps) Copying: 72/1024 [MB] (18 MBps) Copying: 90/1024 [MB] (18 MBps) Copying: 108/1024 [MB] (17 MBps) Copying: 126/1024 [MB] (18 MBps) Copying: 145/1024 [MB] (18 MBps) Copying: 163/1024 [MB] (18 MBps) Copying: 181/1024 [MB] (18 MBps) Copying: 198/1024 [MB] (17 MBps) Copying: 217/1024 [MB] (18 MBps) Copying: 235/1024 [MB] (18 MBps) Copying: 253/1024 [MB] (18 MBps) Copying: 271/1024 [MB] (18 MBps) Copying: 290/1024 [MB] (18 MBps) Copying: 309/1024 [MB] (18 MBps) Copying: 327/1024 [MB] (18 MBps) Copying: 346/1024 [MB] (18 MBps) Copying: 365/1024 [MB] (18 MBps) Copying: 384/1024 [MB] (19 MBps) Copying: 403/1024 [MB] (18 MBps) Copying: 422/1024 [MB] (18 MBps) Copying: 440/1024 [MB] (18 MBps) Copying: 458/1024 [MB] (17 MBps) Copying: 476/1024 [MB] (17 MBps) Copying: 493/1024 [MB] (17 MBps) Copying: 510/1024 [MB] (17 MBps) Copying: 528/1024 [MB] (18 MBps) Copying: 546/1024 [MB] (17 MBps) Copying: 564/1024 [MB] (18 MBps) Copying: 583/1024 [MB] (19 MBps) Copying: 601/1024 [MB] (17 MBps) Copying: 619/1024 [MB] (17 MBps) Copying: 637/1024 [MB] (17 MBps) Copying: 654/1024 [MB] (17 MBps) Copying: 671/1024 [MB] (17 MBps) Copying: 689/1024 [MB] (17 MBps) Copying: 707/1024 [MB] (18 MBps) Copying: 725/1024 [MB] (17 MBps) Copying: 742/1024 [MB] (17 MBps) Copying: 758/1024 [MB] (16 MBps) Copying: 776/1024 [MB] (17 MBps) Copying: 794/1024 [MB] (17 MBps) Copying: 812/1024 [MB] (18 MBps) Copying: 829/1024 [MB] (17 MBps) Copying: 846/1024 [MB] (16 MBps) Copying: 863/1024 [MB] (17 MBps) Copying: 882/1024 [MB] (18 MBps) Copying: 899/1024 [MB] (17 MBps) Copying: 916/1024 [MB] (16 MBps) Copying: 932/1024 [MB] (16 MBps) Copying: 949/1024 [MB] (16 MBps) Copying: 967/1024 [MB] (17 MBps) Copying: 985/1024 [MB] (17 MBps) Copying: 1002/1024 [MB] (17 MBps) Copying: 1019/1024 [MB] (17 MBps) Copying: 1024/1024 [MB] (average 17 MBps) 00:34:30.386 00:34:30.386 12:36:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:34:30.386 12:36:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:34:30.644 12:36:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:34:30.903 [2024-07-10 12:36:40.131099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:30.903 [2024-07-10 12:36:40.131173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:34:30.903 [2024-07-10 12:36:40.131206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:30.903 [2024-07-10 12:36:40.131218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:30.903 [2024-07-10 12:36:40.131257] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:34:30.903 [2024-07-10 12:36:40.135291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:34:30.903 [2024-07-10 12:36:40.135342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:34:30.903 [2024-07-10 12:36:40.135360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.019 ms 00:34:30.903 [2024-07-10 12:36:40.135377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:30.903 [2024-07-10 12:36:40.137443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:30.903 [2024-07-10 12:36:40.137499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:34:30.903 [2024-07-10 12:36:40.137513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.028 ms 00:34:30.903 [2024-07-10 12:36:40.137526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:30.903 [2024-07-10 12:36:40.151994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:30.903 [2024-07-10 12:36:40.152082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:34:30.903 [2024-07-10 12:36:40.152125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.460 ms 00:34:30.903 [2024-07-10 12:36:40.152140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:30.903 [2024-07-10 12:36:40.157408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:30.903 [2024-07-10 12:36:40.157467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:34:30.903 [2024-07-10 12:36:40.157483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.227 ms 00:34:30.903 [2024-07-10 12:36:40.157496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:30.903 [2024-07-10 12:36:40.200895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:30.903 [2024-07-10 12:36:40.200980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:34:30.903 [2024-07-10 12:36:40.201000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.365 ms 00:34:30.903 [2024-07-10 12:36:40.201031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:30.903 [2024-07-10 12:36:40.227152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:30.903 [2024-07-10 12:36:40.227250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:34:30.903 [2024-07-10 12:36:40.227273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.078 ms 00:34:30.903 [2024-07-10 12:36:40.227288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:30.903 [2024-07-10 12:36:40.227559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:30.903 [2024-07-10 12:36:40.227583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:34:30.903 [2024-07-10 12:36:40.227596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.184 ms 00:34:30.903 [2024-07-10 12:36:40.227609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:30.903 [2024-07-10 12:36:40.271690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:30.903 [2024-07-10 12:36:40.271772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:34:30.903 [2024-07-10 12:36:40.271791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.128 ms 00:34:30.903 [2024-07-10 12:36:40.271805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:30.903 [2024-07-10 
12:36:40.314118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:30.903 [2024-07-10 12:36:40.314207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:34:30.903 [2024-07-10 12:36:40.314226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.295 ms 00:34:30.904 [2024-07-10 12:36:40.314240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:30.904 [2024-07-10 12:36:40.358196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:30.904 [2024-07-10 12:36:40.358296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:34:30.904 [2024-07-10 12:36:40.358316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.935 ms 00:34:30.904 [2024-07-10 12:36:40.358329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:31.164 [2024-07-10 12:36:40.401791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:31.164 [2024-07-10 12:36:40.401887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:34:31.164 [2024-07-10 12:36:40.401906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.347 ms 00:34:31.164 [2024-07-10 12:36:40.401921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:31.164 [2024-07-10 12:36:40.402007] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:34:31.164 [2024-07-10 12:36:40.402032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.402047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.402062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.402080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.402095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.402107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.402121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.402132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.402171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.402182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.402197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.402209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.402223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.402234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.402249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 
261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.402260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.402273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.402284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.402298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.402309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.402325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.402337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.402352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.402363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.402380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.402391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.402407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.402418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.402432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.402444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.402458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.402469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.402483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.402494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.402507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.402519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.402532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.402543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.402556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.402568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.402583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.402595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.402608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.402619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.402633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.402644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.402659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.402669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.402683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.402694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.402707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.402718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.402754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.402766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.402780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.402792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.402823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.402834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.402848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.402860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.402875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.402904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.402931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.402944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.402958] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.402969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.402983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.402994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.403009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.403020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.403033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.403044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.403062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.403073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.403087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.403098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.403110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.403121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:34:31.164 [2024-07-10 12:36:40.403135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:34:31.165 [2024-07-10 12:36:40.403146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:34:31.165 [2024-07-10 12:36:40.403159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:34:31.165 [2024-07-10 12:36:40.403171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:34:31.165 [2024-07-10 12:36:40.403184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:34:31.165 [2024-07-10 12:36:40.403194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:34:31.165 [2024-07-10 12:36:40.403208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:34:31.165 [2024-07-10 12:36:40.403218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:34:31.165 [2024-07-10 12:36:40.403232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:34:31.165 [2024-07-10 12:36:40.403242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:34:31.165 [2024-07-10 12:36:40.403258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:34:31.165 [2024-07-10 
12:36:40.403269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:34:31.165 [2024-07-10 12:36:40.403283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:34:31.165 [2024-07-10 12:36:40.403294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:34:31.165 [2024-07-10 12:36:40.403309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:34:31.165 [2024-07-10 12:36:40.403336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:34:31.165 [2024-07-10 12:36:40.403350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:34:31.165 [2024-07-10 12:36:40.403361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:34:31.165 [2024-07-10 12:36:40.403376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:34:31.165 [2024-07-10 12:36:40.403388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:34:31.165 [2024-07-10 12:36:40.403403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:34:31.165 [2024-07-10 12:36:40.403415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:34:31.165 [2024-07-10 12:36:40.403437] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:34:31.165 [2024-07-10 12:36:40.403448] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: af10987c-0e85-4376-ab8d-dcd11810374e 00:34:31.165 [2024-07-10 12:36:40.403463] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:34:31.165 [2024-07-10 12:36:40.403474] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:34:31.165 [2024-07-10 12:36:40.403497] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:34:31.165 [2024-07-10 12:36:40.403508] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:34:31.165 [2024-07-10 12:36:40.403530] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:34:31.165 [2024-07-10 12:36:40.403541] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:34:31.165 [2024-07-10 12:36:40.403555] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:34:31.165 [2024-07-10 12:36:40.403564] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:34:31.165 [2024-07-10 12:36:40.403576] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:34:31.165 [2024-07-10 12:36:40.403587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:31.165 [2024-07-10 12:36:40.403601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:34:31.165 [2024-07-10 12:36:40.403613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.586 ms 00:34:31.165 [2024-07-10 12:36:40.403626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:31.165 [2024-07-10 12:36:40.424951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:31.165 [2024-07-10 12:36:40.425029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:34:31.165 [2024-07-10 12:36:40.425048] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.264 ms 00:34:31.165 [2024-07-10 12:36:40.425061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:31.165 [2024-07-10 12:36:40.425643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:31.165 [2024-07-10 12:36:40.425666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:34:31.165 [2024-07-10 12:36:40.425677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.529 ms 00:34:31.165 [2024-07-10 12:36:40.425691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:31.165 [2024-07-10 12:36:40.490314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:31.165 [2024-07-10 12:36:40.490394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:31.165 [2024-07-10 12:36:40.490413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:31.165 [2024-07-10 12:36:40.490427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:31.165 [2024-07-10 12:36:40.490516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:31.165 [2024-07-10 12:36:40.490530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:31.165 [2024-07-10 12:36:40.490541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:31.165 [2024-07-10 12:36:40.490555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:31.165 [2024-07-10 12:36:40.490674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:31.165 [2024-07-10 12:36:40.490697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:31.165 [2024-07-10 12:36:40.490708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:31.165 [2024-07-10 12:36:40.490721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:31.165 [2024-07-10 12:36:40.490772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:31.165 [2024-07-10 12:36:40.490790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:31.165 [2024-07-10 12:36:40.490801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:31.165 [2024-07-10 12:36:40.490814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:31.165 [2024-07-10 12:36:40.621292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:31.165 [2024-07-10 12:36:40.621372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:31.165 [2024-07-10 12:36:40.621390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:31.165 [2024-07-10 12:36:40.621415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:31.424 [2024-07-10 12:36:40.738154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:31.424 [2024-07-10 12:36:40.738234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:31.424 [2024-07-10 12:36:40.738252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:31.424 [2024-07-10 12:36:40.738267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:31.424 [2024-07-10 12:36:40.738388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:31.424 [2024-07-10 12:36:40.738406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize core IO channel 00:34:31.424 [2024-07-10 12:36:40.738423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:31.424 [2024-07-10 12:36:40.738436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:31.424 [2024-07-10 12:36:40.738490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:31.424 [2024-07-10 12:36:40.738510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:31.425 [2024-07-10 12:36:40.738522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:31.425 [2024-07-10 12:36:40.738536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:31.425 [2024-07-10 12:36:40.738657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:31.425 [2024-07-10 12:36:40.738675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:31.425 [2024-07-10 12:36:40.738687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:31.425 [2024-07-10 12:36:40.738703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:31.425 [2024-07-10 12:36:40.738783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:31.425 [2024-07-10 12:36:40.738800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:34:31.425 [2024-07-10 12:36:40.738811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:31.425 [2024-07-10 12:36:40.738826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:31.425 [2024-07-10 12:36:40.738878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:31.425 [2024-07-10 12:36:40.738893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:31.425 [2024-07-10 12:36:40.738904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:31.425 [2024-07-10 12:36:40.738920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:31.425 [2024-07-10 12:36:40.738972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:31.425 [2024-07-10 12:36:40.738991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:31.425 [2024-07-10 12:36:40.739007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:31.425 [2024-07-10 12:36:40.739025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:31.425 [2024-07-10 12:36:40.739200] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 609.052 ms, result 0 00:34:31.425 true 00:34:31.425 12:36:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 83822 00:34:31.425 12:36:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid83822 00:34:31.425 12:36:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:34:31.425 [2024-07-10 12:36:40.882755] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
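Editor's note on the step recorded just above: the xtrace lines from dirty_shutdown.sh (script lines 83, 84 and 87) are the heart of the dirty-shutdown scenario. The spdk_tgt process that owns ftl0 is killed with SIGKILL instead of being shut down cleanly, its shared-memory trace file is removed, and spdk_dd then generates 262144 blocks of 4096 bytes (1 GiB) of random data, whose copy progress continues in the output below. A condensed sketch of that sequence, using the PID and paths recorded in this particular run (job-specific values, not fixtures of the test), might look like:

    # Sketch of the dirty-shutdown step as recorded in this run.
    # PID 83822 and the /home/vagrant paths are specific to this job.
    spdk_tgt_pid=83822

    # Kill the SPDK target hard so the FTL device never performs a clean
    # shutdown, then drop its shared-memory trace file.
    kill -9 "$spdk_tgt_pid"
    rm -f "/dev/shm/spdk_tgt_trace.pid${spdk_tgt_pid}"

    # Produce 262144 x 4 KiB = 1 GiB of random data for the next write pass.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=/dev/urandom \
        --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 \
        --bs=4096 --count=262144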
00:34:31.425 [2024-07-10 12:36:40.882980] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84673 ] 00:34:31.683 [2024-07-10 12:36:41.073205] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:31.941 [2024-07-10 12:36:41.331708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:38.872  Copying: 192/1024 [MB] (192 MBps) Copying: 388/1024 [MB] (196 MBps) Copying: 583/1024 [MB] (195 MBps) Copying: 783/1024 [MB] (199 MBps) Copying: 983/1024 [MB] (200 MBps) Copying: 1024/1024 [MB] (average 197 MBps) 00:34:38.872 00:34:38.872 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 83822 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:34:38.872 12:36:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:34:39.130 [2024-07-10 12:36:48.396812] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:34:39.130 [2024-07-10 12:36:48.396953] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84748 ] 00:34:39.130 [2024-07-10 12:36:48.566831] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:39.388 [2024-07-10 12:36:48.823161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:39.954 [2024-07-10 12:36:49.231252] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:39.954 [2024-07-10 12:36:49.231344] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:39.954 [2024-07-10 12:36:49.298288] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:34:39.954 [2024-07-10 12:36:49.298664] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:34:39.954 [2024-07-10 12:36:49.298906] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:34:40.213 [2024-07-10 12:36:49.555188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:40.213 [2024-07-10 12:36:49.555264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:34:40.213 [2024-07-10 12:36:49.555281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:34:40.213 [2024-07-10 12:36:49.555292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:40.213 [2024-07-10 12:36:49.555359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:40.213 [2024-07-10 12:36:49.555374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:40.213 [2024-07-10 12:36:49.555385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:34:40.213 [2024-07-10 12:36:49.555400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:40.213 [2024-07-10 12:36:49.555424] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:34:40.213 [2024-07-10 12:36:49.556599] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using bdev as NV Cache device 00:34:40.213 [2024-07-10 12:36:49.556628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:40.213 [2024-07-10 12:36:49.556640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:40.213 [2024-07-10 12:36:49.556651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.211 ms 00:34:40.213 [2024-07-10 12:36:49.556661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:40.213 [2024-07-10 12:36:49.558432] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:34:40.213 [2024-07-10 12:36:49.578694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:40.213 [2024-07-10 12:36:49.578761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:34:40.213 [2024-07-10 12:36:49.578779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.295 ms 00:34:40.213 [2024-07-10 12:36:49.578796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:40.213 [2024-07-10 12:36:49.578868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:40.213 [2024-07-10 12:36:49.578881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:34:40.213 [2024-07-10 12:36:49.578893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:34:40.213 [2024-07-10 12:36:49.578903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:40.213 [2024-07-10 12:36:49.585823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:40.213 [2024-07-10 12:36:49.585956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:40.213 [2024-07-10 12:36:49.586095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.858 ms 00:34:40.213 [2024-07-10 12:36:49.586132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:40.213 [2024-07-10 12:36:49.586257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:40.213 [2024-07-10 12:36:49.586342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:40.213 [2024-07-10 12:36:49.586379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:34:40.213 [2024-07-10 12:36:49.586408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:40.214 [2024-07-10 12:36:49.586528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:40.214 [2024-07-10 12:36:49.586568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:34:40.214 [2024-07-10 12:36:49.586599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:34:40.214 [2024-07-10 12:36:49.586632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:40.214 [2024-07-10 12:36:49.586929] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:34:40.214 [2024-07-10 12:36:49.592494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:40.214 [2024-07-10 12:36:49.592626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:40.214 [2024-07-10 12:36:49.592770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.582 ms 00:34:40.214 [2024-07-10 12:36:49.592788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:40.214 [2024-07-10 12:36:49.592834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:34:40.214 [2024-07-10 12:36:49.592846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:34:40.214 [2024-07-10 12:36:49.592857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:34:40.214 [2024-07-10 12:36:49.592868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:40.214 [2024-07-10 12:36:49.592920] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:34:40.214 [2024-07-10 12:36:49.592948] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:34:40.214 [2024-07-10 12:36:49.592989] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:34:40.214 [2024-07-10 12:36:49.593007] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:34:40.214 [2024-07-10 12:36:49.593092] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:34:40.214 [2024-07-10 12:36:49.593105] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:34:40.214 [2024-07-10 12:36:49.593120] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:34:40.214 [2024-07-10 12:36:49.593133] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:34:40.214 [2024-07-10 12:36:49.593146] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:34:40.214 [2024-07-10 12:36:49.593158] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:34:40.214 [2024-07-10 12:36:49.593172] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:34:40.214 [2024-07-10 12:36:49.593182] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:34:40.214 [2024-07-10 12:36:49.593192] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:34:40.214 [2024-07-10 12:36:49.593202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:40.214 [2024-07-10 12:36:49.593212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:34:40.214 [2024-07-10 12:36:49.593223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:34:40.214 [2024-07-10 12:36:49.593233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:40.214 [2024-07-10 12:36:49.593300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:40.214 [2024-07-10 12:36:49.593311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:34:40.214 [2024-07-10 12:36:49.593333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:34:40.214 [2024-07-10 12:36:49.593343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:40.214 [2024-07-10 12:36:49.593432] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:34:40.214 [2024-07-10 12:36:49.593445] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:34:40.214 [2024-07-10 12:36:49.593455] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:40.214 [2024-07-10 12:36:49.593466] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:40.214 [2024-07-10 
12:36:49.593476] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:34:40.214 [2024-07-10 12:36:49.593486] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:34:40.214 [2024-07-10 12:36:49.593499] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:34:40.214 [2024-07-10 12:36:49.593509] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:34:40.214 [2024-07-10 12:36:49.593518] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:34:40.214 [2024-07-10 12:36:49.593529] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:40.214 [2024-07-10 12:36:49.593538] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:34:40.214 [2024-07-10 12:36:49.593548] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:34:40.214 [2024-07-10 12:36:49.593557] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:40.214 [2024-07-10 12:36:49.593567] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:34:40.214 [2024-07-10 12:36:49.593577] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:34:40.214 [2024-07-10 12:36:49.593587] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:40.214 [2024-07-10 12:36:49.593609] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:34:40.214 [2024-07-10 12:36:49.593618] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:34:40.214 [2024-07-10 12:36:49.593628] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:40.214 [2024-07-10 12:36:49.593637] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:34:40.214 [2024-07-10 12:36:49.593646] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:34:40.214 [2024-07-10 12:36:49.593656] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:40.214 [2024-07-10 12:36:49.593665] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:34:40.214 [2024-07-10 12:36:49.593674] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:34:40.214 [2024-07-10 12:36:49.593683] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:40.214 [2024-07-10 12:36:49.593693] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:34:40.214 [2024-07-10 12:36:49.593702] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:34:40.214 [2024-07-10 12:36:49.593712] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:40.214 [2024-07-10 12:36:49.593721] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:34:40.214 [2024-07-10 12:36:49.593749] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:34:40.214 [2024-07-10 12:36:49.593759] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:40.214 [2024-07-10 12:36:49.593769] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:34:40.214 [2024-07-10 12:36:49.593780] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:34:40.214 [2024-07-10 12:36:49.593791] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:40.214 [2024-07-10 12:36:49.593800] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:34:40.214 [2024-07-10 12:36:49.593812] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 
MiB 00:34:40.214 [2024-07-10 12:36:49.593821] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:40.214 [2024-07-10 12:36:49.593831] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:34:40.214 [2024-07-10 12:36:49.593841] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:34:40.214 [2024-07-10 12:36:49.593850] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:40.214 [2024-07-10 12:36:49.593859] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:34:40.214 [2024-07-10 12:36:49.593874] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:34:40.214 [2024-07-10 12:36:49.593884] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:40.214 [2024-07-10 12:36:49.593893] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:34:40.214 [2024-07-10 12:36:49.593904] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:34:40.214 [2024-07-10 12:36:49.593913] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:40.214 [2024-07-10 12:36:49.593923] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:40.214 [2024-07-10 12:36:49.593934] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:34:40.214 [2024-07-10 12:36:49.593943] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:34:40.214 [2024-07-10 12:36:49.593953] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:34:40.214 [2024-07-10 12:36:49.593962] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:34:40.214 [2024-07-10 12:36:49.593971] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:34:40.214 [2024-07-10 12:36:49.593980] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:34:40.214 [2024-07-10 12:36:49.593991] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:34:40.214 [2024-07-10 12:36:49.594009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:40.214 [2024-07-10 12:36:49.594022] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:34:40.214 [2024-07-10 12:36:49.594032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:34:40.214 [2024-07-10 12:36:49.594043] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:34:40.214 [2024-07-10 12:36:49.594053] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:34:40.214 [2024-07-10 12:36:49.594063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:34:40.214 [2024-07-10 12:36:49.594074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:34:40.214 [2024-07-10 12:36:49.594084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:34:40.214 [2024-07-10 12:36:49.594094] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:34:40.214 [2024-07-10 12:36:49.594104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:34:40.214 [2024-07-10 12:36:49.594115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:34:40.214 [2024-07-10 12:36:49.594126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:34:40.214 [2024-07-10 12:36:49.594137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:34:40.214 [2024-07-10 12:36:49.594147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:34:40.214 [2024-07-10 12:36:49.594157] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:34:40.214 [2024-07-10 12:36:49.594167] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:34:40.214 [2024-07-10 12:36:49.594178] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:40.214 [2024-07-10 12:36:49.594190] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:34:40.215 [2024-07-10 12:36:49.594200] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:34:40.215 [2024-07-10 12:36:49.594210] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:34:40.215 [2024-07-10 12:36:49.594221] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:34:40.215 [2024-07-10 12:36:49.594231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:40.215 [2024-07-10 12:36:49.594242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:34:40.215 [2024-07-10 12:36:49.594252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.851 ms 00:34:40.215 [2024-07-10 12:36:49.594263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:40.215 [2024-07-10 12:36:49.650611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:40.215 [2024-07-10 12:36:49.650671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:40.215 [2024-07-10 12:36:49.650688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.387 ms 00:34:40.215 [2024-07-10 12:36:49.650700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:40.215 [2024-07-10 12:36:49.650815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:40.215 [2024-07-10 12:36:49.650828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:34:40.215 [2024-07-10 12:36:49.650839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:34:40.215 [2024-07-10 12:36:49.650875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:34:40.473 [2024-07-10 12:36:49.703308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:40.473 [2024-07-10 12:36:49.703368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:40.473 [2024-07-10 12:36:49.703385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.439 ms 00:34:40.473 [2024-07-10 12:36:49.703396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:40.473 [2024-07-10 12:36:49.703457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:40.473 [2024-07-10 12:36:49.703474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:40.473 [2024-07-10 12:36:49.703485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:34:40.473 [2024-07-10 12:36:49.703495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:40.473 [2024-07-10 12:36:49.704001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:40.473 [2024-07-10 12:36:49.704016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:40.473 [2024-07-10 12:36:49.704028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.438 ms 00:34:40.473 [2024-07-10 12:36:49.704038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:40.473 [2024-07-10 12:36:49.704178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:40.473 [2024-07-10 12:36:49.704191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:40.473 [2024-07-10 12:36:49.704207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:34:40.473 [2024-07-10 12:36:49.704218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:40.473 [2024-07-10 12:36:49.725586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:40.473 [2024-07-10 12:36:49.725639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:40.473 [2024-07-10 12:36:49.725655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.379 ms 00:34:40.474 [2024-07-10 12:36:49.725666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:40.474 [2024-07-10 12:36:49.746499] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:34:40.474 [2024-07-10 12:36:49.746544] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:34:40.474 [2024-07-10 12:36:49.746561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:40.474 [2024-07-10 12:36:49.746572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:34:40.474 [2024-07-10 12:36:49.746585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.757 ms 00:34:40.474 [2024-07-10 12:36:49.746595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:40.474 [2024-07-10 12:36:49.777534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:40.474 [2024-07-10 12:36:49.777584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:34:40.474 [2024-07-10 12:36:49.777602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.942 ms 00:34:40.474 [2024-07-10 12:36:49.777613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:40.474 [2024-07-10 12:36:49.797968] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:34:40.474 [2024-07-10 12:36:49.798011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:34:40.474 [2024-07-10 12:36:49.798025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.328 ms 00:34:40.474 [2024-07-10 12:36:49.798035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:40.474 [2024-07-10 12:36:49.818256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:40.474 [2024-07-10 12:36:49.818298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:34:40.474 [2024-07-10 12:36:49.818314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.210 ms 00:34:40.474 [2024-07-10 12:36:49.818325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:40.474 [2024-07-10 12:36:49.819193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:40.474 [2024-07-10 12:36:49.819222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:34:40.474 [2024-07-10 12:36:49.819239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.759 ms 00:34:40.474 [2024-07-10 12:36:49.819249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:40.474 [2024-07-10 12:36:49.911214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:40.474 [2024-07-10 12:36:49.911295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:34:40.474 [2024-07-10 12:36:49.911313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 92.085 ms 00:34:40.474 [2024-07-10 12:36:49.911325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:40.474 [2024-07-10 12:36:49.923307] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:34:40.474 [2024-07-10 12:36:49.926467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:40.474 [2024-07-10 12:36:49.926499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:34:40.474 [2024-07-10 12:36:49.926515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.103 ms 00:34:40.474 [2024-07-10 12:36:49.926526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:40.474 [2024-07-10 12:36:49.926636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:40.474 [2024-07-10 12:36:49.926650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:34:40.474 [2024-07-10 12:36:49.926665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:34:40.474 [2024-07-10 12:36:49.926676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:40.474 [2024-07-10 12:36:49.926767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:40.474 [2024-07-10 12:36:49.926781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:34:40.474 [2024-07-10 12:36:49.926792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:34:40.474 [2024-07-10 12:36:49.926802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:40.474 [2024-07-10 12:36:49.926827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:40.474 [2024-07-10 12:36:49.926838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:34:40.474 [2024-07-10 12:36:49.926849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.005 ms 00:34:40.474 [2024-07-10 12:36:49.926864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:40.474 [2024-07-10 12:36:49.926900] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:34:40.474 [2024-07-10 12:36:49.926913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:40.474 [2024-07-10 12:36:49.926923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:34:40.474 [2024-07-10 12:36:49.926933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:34:40.474 [2024-07-10 12:36:49.926944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:40.733 [2024-07-10 12:36:49.965119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:40.733 [2024-07-10 12:36:49.965162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:34:40.733 [2024-07-10 12:36:49.965184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.217 ms 00:34:40.733 [2024-07-10 12:36:49.965199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:40.733 [2024-07-10 12:36:49.965276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:40.733 [2024-07-10 12:36:49.965288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:34:40.733 [2024-07-10 12:36:49.965300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:34:40.733 [2024-07-10 12:36:49.965311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:40.733 [2024-07-10 12:36:49.966477] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 411.424 ms, result 0 00:35:16.382  Copying: 28/1024 [MB] (28 MBps) Copying: 56/1024 [MB] (28 MBps) Copying: 85/1024 [MB] (28 MBps) Copying: 113/1024 [MB] (28 MBps) Copying: 141/1024 [MB] (27 MBps) Copying: 171/1024 [MB] (30 MBps) Copying: 202/1024 [MB] (31 MBps) Copying: 233/1024 [MB] (30 MBps) Copying: 262/1024 [MB] (29 MBps) Copying: 291/1024 [MB] (28 MBps) Copying: 322/1024 [MB] (31 MBps) Copying: 355/1024 [MB] (33 MBps) Copying: 385/1024 [MB] (29 MBps) Copying: 412/1024 [MB] (27 MBps) Copying: 441/1024 [MB] (28 MBps) Copying: 469/1024 [MB] (28 MBps) Copying: 499/1024 [MB] (29 MBps) Copying: 528/1024 [MB] (28 MBps) Copying: 556/1024 [MB] (28 MBps) Copying: 585/1024 [MB] (28 MBps) Copying: 615/1024 [MB] (29 MBps) Copying: 642/1024 [MB] (27 MBps) Copying: 671/1024 [MB] (28 MBps) Copying: 702/1024 [MB] (30 MBps) Copying: 733/1024 [MB] (30 MBps) Copying: 763/1024 [MB] (30 MBps) Copying: 791/1024 [MB] (28 MBps) Copying: 819/1024 [MB] (27 MBps) Copying: 847/1024 [MB] (28 MBps) Copying: 877/1024 [MB] (29 MBps) Copying: 907/1024 [MB] (29 MBps) Copying: 936/1024 [MB] (29 MBps) Copying: 966/1024 [MB] (29 MBps) Copying: 995/1024 [MB] (28 MBps) Copying: 1023/1024 [MB] (27 MBps) Copying: 1024/1024 [MB] (average 28 MBps)[2024-07-10 12:37:25.653251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:16.382 [2024-07-10 12:37:25.653332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:35:16.382 [2024-07-10 12:37:25.653355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:35:16.382 [2024-07-10 12:37:25.653377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:16.382 [2024-07-10 12:37:25.654066] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: 
[FTL][ftl0] FTL IO channel destroy on app_thread 00:35:16.382 [2024-07-10 12:37:25.658782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:16.382 [2024-07-10 12:37:25.658825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:35:16.382 [2024-07-10 12:37:25.658841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.685 ms 00:35:16.382 [2024-07-10 12:37:25.658851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:16.382 [2024-07-10 12:37:25.668378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:16.382 [2024-07-10 12:37:25.668421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:35:16.382 [2024-07-10 12:37:25.668445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.501 ms 00:35:16.382 [2024-07-10 12:37:25.668455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:16.382 [2024-07-10 12:37:25.694329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:16.382 [2024-07-10 12:37:25.694404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:35:16.382 [2024-07-10 12:37:25.694422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.893 ms 00:35:16.382 [2024-07-10 12:37:25.694433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:16.382 [2024-07-10 12:37:25.699415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:16.382 [2024-07-10 12:37:25.699450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:35:16.382 [2024-07-10 12:37:25.699463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.954 ms 00:35:16.382 [2024-07-10 12:37:25.699482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:16.382 [2024-07-10 12:37:25.736431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:16.382 [2024-07-10 12:37:25.736475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:35:16.382 [2024-07-10 12:37:25.736492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.942 ms 00:35:16.382 [2024-07-10 12:37:25.736502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:16.382 [2024-07-10 12:37:25.757548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:16.382 [2024-07-10 12:37:25.757589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:35:16.382 [2024-07-10 12:37:25.757605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.040 ms 00:35:16.382 [2024-07-10 12:37:25.757616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:16.640 [2024-07-10 12:37:25.866090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:16.640 [2024-07-10 12:37:25.866181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:35:16.640 [2024-07-10 12:37:25.866199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 108.600 ms 00:35:16.640 [2024-07-10 12:37:25.866211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:16.640 [2024-07-10 12:37:25.904115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:16.640 [2024-07-10 12:37:25.904160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:35:16.640 [2024-07-10 12:37:25.904175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.946 
ms 00:35:16.640 [2024-07-10 12:37:25.904185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:16.640 [2024-07-10 12:37:25.943803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:16.640 [2024-07-10 12:37:25.943843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:35:16.640 [2024-07-10 12:37:25.943857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.641 ms 00:35:16.640 [2024-07-10 12:37:25.943868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:16.640 [2024-07-10 12:37:25.982898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:16.640 [2024-07-10 12:37:25.982933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:35:16.640 [2024-07-10 12:37:25.982947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.056 ms 00:35:16.641 [2024-07-10 12:37:25.982958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:16.641 [2024-07-10 12:37:26.021037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:16.641 [2024-07-10 12:37:26.021077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:35:16.641 [2024-07-10 12:37:26.021091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.065 ms 00:35:16.641 [2024-07-10 12:37:26.021101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:16.641 [2024-07-10 12:37:26.021139] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:35:16.641 [2024-07-10 12:37:26.021156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 103168 / 261120 wr_cnt: 1 state: open 00:35:16.641 [2024-07-10 12:37:26.021170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.021182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.021193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.021205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.021216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.021227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.021238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.021250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.021260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.021272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.021283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.021294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.021305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 
261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.021317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.021328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.021339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.021350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.021360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.021371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.021381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.021391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.021402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.021413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.021424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.021434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.021446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.021457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.021468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.021481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.021493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.021504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.021516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.021527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.021537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.021549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.021560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.021571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.021582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.021594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.021605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.021616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.021626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.021637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.021648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.021659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.021670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.021682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.021693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.021703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.021714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.021725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.021749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.021760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.021771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.021783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.021794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.021806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.021817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.021828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.021838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.021868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.021880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.021892] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.021910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.021921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.021932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.021943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.021954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.021966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.021978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.021990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.022001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.022012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.022022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.022033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.022044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.022055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.022066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.022077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.022087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.022098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.022109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.022119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.022130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.022141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.022151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.022162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 
12:37:26.022173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.022184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:35:16.641 [2024-07-10 12:37:26.022194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:35:16.642 [2024-07-10 12:37:26.022204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:35:16.642 [2024-07-10 12:37:26.022214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:35:16.642 [2024-07-10 12:37:26.022229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:35:16.642 [2024-07-10 12:37:26.022241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:35:16.642 [2024-07-10 12:37:26.022252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:35:16.642 [2024-07-10 12:37:26.022265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:35:16.642 [2024-07-10 12:37:26.022276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:35:16.642 [2024-07-10 12:37:26.022287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:35:16.642 [2024-07-10 12:37:26.022298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:35:16.642 [2024-07-10 12:37:26.022316] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:35:16.642 [2024-07-10 12:37:26.022327] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: af10987c-0e85-4376-ab8d-dcd11810374e 00:35:16.642 [2024-07-10 12:37:26.022338] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 103168 00:35:16.642 [2024-07-10 12:37:26.022347] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 104128 00:35:16.642 [2024-07-10 12:37:26.022362] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 103168 00:35:16.642 [2024-07-10 12:37:26.022377] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0093 00:35:16.642 [2024-07-10 12:37:26.022387] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:35:16.642 [2024-07-10 12:37:26.022397] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:35:16.642 [2024-07-10 12:37:26.022407] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:35:16.642 [2024-07-10 12:37:26.022416] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:35:16.642 [2024-07-10 12:37:26.022425] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:35:16.642 [2024-07-10 12:37:26.022435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:16.642 [2024-07-10 12:37:26.022445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:35:16.642 [2024-07-10 12:37:26.022467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.300 ms 00:35:16.642 [2024-07-10 12:37:26.022477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:16.642 [2024-07-10 12:37:26.043181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:16.642 [2024-07-10 
12:37:26.043215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:35:16.642 [2024-07-10 12:37:26.043234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.704 ms 00:35:16.642 [2024-07-10 12:37:26.043244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:16.642 [2024-07-10 12:37:26.043749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:16.642 [2024-07-10 12:37:26.043763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:35:16.642 [2024-07-10 12:37:26.043775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.484 ms 00:35:16.642 [2024-07-10 12:37:26.043786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:16.642 [2024-07-10 12:37:26.089948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:16.642 [2024-07-10 12:37:26.089985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:35:16.642 [2024-07-10 12:37:26.090000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:16.642 [2024-07-10 12:37:26.090011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:16.642 [2024-07-10 12:37:26.090074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:16.642 [2024-07-10 12:37:26.090085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:16.642 [2024-07-10 12:37:26.090097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:16.642 [2024-07-10 12:37:26.090107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:16.642 [2024-07-10 12:37:26.090171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:16.642 [2024-07-10 12:37:26.090191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:35:16.642 [2024-07-10 12:37:26.090202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:16.642 [2024-07-10 12:37:26.090212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:16.642 [2024-07-10 12:37:26.090230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:16.642 [2024-07-10 12:37:26.090241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:35:16.642 [2024-07-10 12:37:26.090252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:16.642 [2024-07-10 12:37:26.090262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:16.900 [2024-07-10 12:37:26.208156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:16.900 [2024-07-10 12:37:26.208236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:35:16.900 [2024-07-10 12:37:26.208253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:16.900 [2024-07-10 12:37:26.208264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:16.900 [2024-07-10 12:37:26.309400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:16.900 [2024-07-10 12:37:26.309461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:16.900 [2024-07-10 12:37:26.309477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:16.900 [2024-07-10 12:37:26.309487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:16.900 [2024-07-10 12:37:26.309561] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:16.900 [2024-07-10 12:37:26.309574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:16.900 [2024-07-10 12:37:26.309592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:16.900 [2024-07-10 12:37:26.309602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:16.900 [2024-07-10 12:37:26.309641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:16.900 [2024-07-10 12:37:26.309652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:16.900 [2024-07-10 12:37:26.309663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:16.900 [2024-07-10 12:37:26.309675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:16.900 [2024-07-10 12:37:26.309809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:16.900 [2024-07-10 12:37:26.309823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:35:16.900 [2024-07-10 12:37:26.309834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:16.900 [2024-07-10 12:37:26.309849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:16.900 [2024-07-10 12:37:26.309888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:16.900 [2024-07-10 12:37:26.309900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:35:16.900 [2024-07-10 12:37:26.309910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:16.900 [2024-07-10 12:37:26.309920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:16.900 [2024-07-10 12:37:26.309962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:16.900 [2024-07-10 12:37:26.309974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:35:16.900 [2024-07-10 12:37:26.309985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:16.900 [2024-07-10 12:37:26.309999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:16.900 [2024-07-10 12:37:26.310046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:16.900 [2024-07-10 12:37:26.310058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:35:16.900 [2024-07-10 12:37:26.310069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:16.900 [2024-07-10 12:37:26.310079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:16.900 [2024-07-10 12:37:26.310211] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 658.351 ms, result 0 00:35:19.446 00:35:19.446 00:35:19.446 12:37:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:35:20.821 12:37:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:35:20.821 [2024-07-10 12:37:30.166790] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:35:20.821 [2024-07-10 12:37:30.166922] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85164 ] 00:35:21.079 [2024-07-10 12:37:30.338068] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:21.338 [2024-07-10 12:37:30.585163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:21.597 [2024-07-10 12:37:31.001148] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:35:21.597 [2024-07-10 12:37:31.001223] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:35:21.856 [2024-07-10 12:37:31.166385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:21.856 [2024-07-10 12:37:31.166449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:35:21.856 [2024-07-10 12:37:31.166467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:35:21.856 [2024-07-10 12:37:31.166480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:21.856 [2024-07-10 12:37:31.166550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:21.856 [2024-07-10 12:37:31.166564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:35:21.856 [2024-07-10 12:37:31.166575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:35:21.856 [2024-07-10 12:37:31.166590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:21.856 [2024-07-10 12:37:31.166618] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:35:21.856 [2024-07-10 12:37:31.167809] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:35:21.856 [2024-07-10 12:37:31.167836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:21.856 [2024-07-10 12:37:31.167852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:35:21.856 [2024-07-10 12:37:31.167864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.230 ms 00:35:21.856 [2024-07-10 12:37:31.167874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:21.856 [2024-07-10 12:37:31.169791] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:35:21.856 [2024-07-10 12:37:31.190276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:21.856 [2024-07-10 12:37:31.190315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:35:21.856 [2024-07-10 12:37:31.190330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.519 ms 00:35:21.856 [2024-07-10 12:37:31.190356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:21.856 [2024-07-10 12:37:31.190424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:21.856 [2024-07-10 12:37:31.190437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:35:21.856 [2024-07-10 12:37:31.190453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:35:21.856 [2024-07-10 12:37:31.190463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:21.856 [2024-07-10 12:37:31.197193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:35:21.857 [2024-07-10 12:37:31.197225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:35:21.857 [2024-07-10 12:37:31.197237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.669 ms 00:35:21.857 [2024-07-10 12:37:31.197250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:21.857 [2024-07-10 12:37:31.197338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:21.857 [2024-07-10 12:37:31.197357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:21.857 [2024-07-10 12:37:31.197368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:35:21.857 [2024-07-10 12:37:31.197379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:21.857 [2024-07-10 12:37:31.197426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:21.857 [2024-07-10 12:37:31.197438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:35:21.857 [2024-07-10 12:37:31.197449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:35:21.857 [2024-07-10 12:37:31.197459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:21.857 [2024-07-10 12:37:31.197485] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:35:21.857 [2024-07-10 12:37:31.203084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:21.857 [2024-07-10 12:37:31.203116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:21.857 [2024-07-10 12:37:31.203129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.615 ms 00:35:21.857 [2024-07-10 12:37:31.203139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:21.857 [2024-07-10 12:37:31.203196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:21.857 [2024-07-10 12:37:31.203213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:35:21.857 [2024-07-10 12:37:31.203225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:35:21.857 [2024-07-10 12:37:31.203236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:21.857 [2024-07-10 12:37:31.203291] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:35:21.857 [2024-07-10 12:37:31.203321] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:35:21.857 [2024-07-10 12:37:31.203372] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:35:21.857 [2024-07-10 12:37:31.203398] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:35:21.857 [2024-07-10 12:37:31.203484] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:35:21.857 [2024-07-10 12:37:31.203499] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:35:21.857 [2024-07-10 12:37:31.203513] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:35:21.857 [2024-07-10 12:37:31.203527] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:35:21.857 [2024-07-10 12:37:31.203539] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:35:21.857 [2024-07-10 12:37:31.203552] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:35:21.857 [2024-07-10 12:37:31.203563] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:35:21.857 [2024-07-10 12:37:31.203574] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:35:21.857 [2024-07-10 12:37:31.203584] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:35:21.857 [2024-07-10 12:37:31.203595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:21.857 [2024-07-10 12:37:31.203608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:35:21.857 [2024-07-10 12:37:31.203620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:35:21.857 [2024-07-10 12:37:31.203631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:21.857 [2024-07-10 12:37:31.203704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:21.857 [2024-07-10 12:37:31.203716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:35:21.857 [2024-07-10 12:37:31.203726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:35:21.857 [2024-07-10 12:37:31.203753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:21.857 [2024-07-10 12:37:31.203839] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:35:21.857 [2024-07-10 12:37:31.203853] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:35:21.857 [2024-07-10 12:37:31.203868] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:35:21.857 [2024-07-10 12:37:31.203880] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:21.857 [2024-07-10 12:37:31.203890] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:35:21.857 [2024-07-10 12:37:31.203900] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:35:21.857 [2024-07-10 12:37:31.203910] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:35:21.857 [2024-07-10 12:37:31.203921] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:35:21.857 [2024-07-10 12:37:31.203933] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:35:21.857 [2024-07-10 12:37:31.203943] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:35:21.857 [2024-07-10 12:37:31.203952] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:35:21.857 [2024-07-10 12:37:31.203965] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:35:21.857 [2024-07-10 12:37:31.203975] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:35:21.857 [2024-07-10 12:37:31.203984] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:35:21.857 [2024-07-10 12:37:31.203994] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:35:21.857 [2024-07-10 12:37:31.204004] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:21.857 [2024-07-10 12:37:31.204014] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:35:21.857 [2024-07-10 12:37:31.204023] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:35:21.857 [2024-07-10 12:37:31.204033] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:21.857 [2024-07-10 12:37:31.204042] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:35:21.857 [2024-07-10 12:37:31.204062] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:35:21.857 [2024-07-10 12:37:31.204072] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:21.857 [2024-07-10 12:37:31.204082] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:35:21.857 [2024-07-10 12:37:31.204092] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:35:21.857 [2024-07-10 12:37:31.204110] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:21.857 [2024-07-10 12:37:31.204120] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:35:21.857 [2024-07-10 12:37:31.204130] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:35:21.857 [2024-07-10 12:37:31.204140] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:21.857 [2024-07-10 12:37:31.204149] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:35:21.857 [2024-07-10 12:37:31.204159] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:35:21.857 [2024-07-10 12:37:31.204167] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:21.857 [2024-07-10 12:37:31.204177] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:35:21.857 [2024-07-10 12:37:31.204186] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:35:21.857 [2024-07-10 12:37:31.204195] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:35:21.857 [2024-07-10 12:37:31.204205] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:35:21.857 [2024-07-10 12:37:31.204214] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:35:21.857 [2024-07-10 12:37:31.204223] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:35:21.857 [2024-07-10 12:37:31.204232] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:35:21.857 [2024-07-10 12:37:31.204241] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:35:21.857 [2024-07-10 12:37:31.204251] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:21.857 [2024-07-10 12:37:31.204260] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:35:21.857 [2024-07-10 12:37:31.204270] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:35:21.857 [2024-07-10 12:37:31.204295] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:21.858 [2024-07-10 12:37:31.204306] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:35:21.858 [2024-07-10 12:37:31.204318] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:35:21.858 [2024-07-10 12:37:31.204328] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:35:21.858 [2024-07-10 12:37:31.204339] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:21.858 [2024-07-10 12:37:31.204349] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:35:21.858 [2024-07-10 12:37:31.204360] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:35:21.858 [2024-07-10 12:37:31.204370] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:35:21.858 
[2024-07-10 12:37:31.204380] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:35:21.858 [2024-07-10 12:37:31.204389] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:35:21.858 [2024-07-10 12:37:31.204399] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:35:21.858 [2024-07-10 12:37:31.204410] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:35:21.858 [2024-07-10 12:37:31.204423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:21.858 [2024-07-10 12:37:31.204436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:35:21.858 [2024-07-10 12:37:31.204447] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:35:21.858 [2024-07-10 12:37:31.204458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:35:21.858 [2024-07-10 12:37:31.204469] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:35:21.858 [2024-07-10 12:37:31.204481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:35:21.858 [2024-07-10 12:37:31.204491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:35:21.858 [2024-07-10 12:37:31.204502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:35:21.858 [2024-07-10 12:37:31.204513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:35:21.858 [2024-07-10 12:37:31.204526] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:35:21.858 [2024-07-10 12:37:31.204536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:35:21.858 [2024-07-10 12:37:31.204547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:35:21.858 [2024-07-10 12:37:31.204557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:35:21.858 [2024-07-10 12:37:31.204569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:35:21.858 [2024-07-10 12:37:31.204580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:35:21.858 [2024-07-10 12:37:31.204591] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:35:21.858 [2024-07-10 12:37:31.204603] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:21.858 [2024-07-10 12:37:31.204616] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:35:21.858 [2024-07-10 12:37:31.204627] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:35:21.858 [2024-07-10 12:37:31.204638] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:35:21.858 [2024-07-10 12:37:31.204650] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:35:21.858 [2024-07-10 12:37:31.204662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:21.858 [2024-07-10 12:37:31.204677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:35:21.858 [2024-07-10 12:37:31.204688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.875 ms 00:35:21.858 [2024-07-10 12:37:31.204698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:21.858 [2024-07-10 12:37:31.262411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:21.858 [2024-07-10 12:37:31.262463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:21.858 [2024-07-10 12:37:31.262480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.742 ms 00:35:21.858 [2024-07-10 12:37:31.262491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:21.858 [2024-07-10 12:37:31.262584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:21.858 [2024-07-10 12:37:31.262596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:35:21.858 [2024-07-10 12:37:31.262608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:35:21.858 [2024-07-10 12:37:31.262619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:21.858 [2024-07-10 12:37:31.314742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:21.858 [2024-07-10 12:37:31.314788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:35:21.858 [2024-07-10 12:37:31.314805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.138 ms 00:35:21.858 [2024-07-10 12:37:31.314815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:21.858 [2024-07-10 12:37:31.314864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:21.858 [2024-07-10 12:37:31.314876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:35:21.858 [2024-07-10 12:37:31.314888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:35:21.858 [2024-07-10 12:37:31.314898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:21.858 [2024-07-10 12:37:31.315378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:21.858 [2024-07-10 12:37:31.315399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:35:21.858 [2024-07-10 12:37:31.315410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.418 ms 00:35:21.858 [2024-07-10 12:37:31.315420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:21.858 [2024-07-10 12:37:31.315545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:21.858 [2024-07-10 12:37:31.315558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:21.858 [2024-07-10 12:37:31.315568] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:35:21.858 [2024-07-10 12:37:31.315578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:22.118 [2024-07-10 12:37:31.336687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:22.118 [2024-07-10 12:37:31.336740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:35:22.118 [2024-07-10 12:37:31.336755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.121 ms 00:35:22.118 [2024-07-10 12:37:31.336766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:22.118 [2024-07-10 12:37:31.357793] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:35:22.118 [2024-07-10 12:37:31.357838] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:35:22.118 [2024-07-10 12:37:31.357854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:22.118 [2024-07-10 12:37:31.357866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:35:22.118 [2024-07-10 12:37:31.357879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.996 ms 00:35:22.118 [2024-07-10 12:37:31.357890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:22.118 [2024-07-10 12:37:31.388237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:22.118 [2024-07-10 12:37:31.388279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:35:22.118 [2024-07-10 12:37:31.388294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.353 ms 00:35:22.118 [2024-07-10 12:37:31.388320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:22.118 [2024-07-10 12:37:31.407289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:22.118 [2024-07-10 12:37:31.407341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:35:22.118 [2024-07-10 12:37:31.407355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.953 ms 00:35:22.118 [2024-07-10 12:37:31.407366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:22.118 [2024-07-10 12:37:31.426033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:22.118 [2024-07-10 12:37:31.426072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:35:22.118 [2024-07-10 12:37:31.426085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.658 ms 00:35:22.118 [2024-07-10 12:37:31.426095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:22.118 [2024-07-10 12:37:31.426949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:22.118 [2024-07-10 12:37:31.426975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:35:22.118 [2024-07-10 12:37:31.426988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.759 ms 00:35:22.118 [2024-07-10 12:37:31.427007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:22.118 [2024-07-10 12:37:31.517177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:22.118 [2024-07-10 12:37:31.517255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:35:22.118 [2024-07-10 12:37:31.517274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 90.292 ms 00:35:22.118 [2024-07-10 12:37:31.517286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:22.118 [2024-07-10 12:37:31.529252] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:35:22.118 [2024-07-10 12:37:31.532381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:22.118 [2024-07-10 12:37:31.532415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:35:22.118 [2024-07-10 12:37:31.532431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.050 ms 00:35:22.118 [2024-07-10 12:37:31.532442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:22.118 [2024-07-10 12:37:31.532547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:22.118 [2024-07-10 12:37:31.532561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:35:22.118 [2024-07-10 12:37:31.532573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:35:22.118 [2024-07-10 12:37:31.532583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:22.118 [2024-07-10 12:37:31.534086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:22.118 [2024-07-10 12:37:31.534129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:35:22.118 [2024-07-10 12:37:31.534142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.463 ms 00:35:22.118 [2024-07-10 12:37:31.534153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:22.118 [2024-07-10 12:37:31.534188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:22.118 [2024-07-10 12:37:31.534199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:35:22.118 [2024-07-10 12:37:31.534210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:35:22.118 [2024-07-10 12:37:31.534220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:22.118 [2024-07-10 12:37:31.534286] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:35:22.118 [2024-07-10 12:37:31.534303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:22.118 [2024-07-10 12:37:31.534314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:35:22.118 [2024-07-10 12:37:31.534329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:35:22.118 [2024-07-10 12:37:31.534339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:22.118 [2024-07-10 12:37:31.571600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:22.118 [2024-07-10 12:37:31.571639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:35:22.118 [2024-07-10 12:37:31.571655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.301 ms 00:35:22.118 [2024-07-10 12:37:31.571682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:22.118 [2024-07-10 12:37:31.571770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:22.118 [2024-07-10 12:37:31.571794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:35:22.118 [2024-07-10 12:37:31.571806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:35:22.118 [2024-07-10 12:37:31.571816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:35:22.118 [2024-07-10 12:37:31.578029] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 409.668 ms, result 0 00:35:52.770  Copying: 1236/1048576 [kB] (1236 kBps) Copying: 7376/1048576 [kB] (6140 kBps) Copying: 43/1024 [MB] (36 MBps) Copying: 80/1024 [MB] (36 MBps) Copying: 116/1024 [MB] (35 MBps) Copying: 153/1024 [MB] (36 MBps) Copying: 191/1024 [MB] (37 MBps) Copying: 228/1024 [MB] (37 MBps) Copying: 265/1024 [MB] (36 MBps) Copying: 302/1024 [MB] (37 MBps) Copying: 340/1024 [MB] (37 MBps) Copying: 377/1024 [MB] (37 MBps) Copying: 415/1024 [MB] (37 MBps) Copying: 451/1024 [MB] (35 MBps) Copying: 487/1024 [MB] (36 MBps) Copying: 523/1024 [MB] (36 MBps) Copying: 559/1024 [MB] (36 MBps) Copying: 596/1024 [MB] (36 MBps) Copying: 632/1024 [MB] (36 MBps) Copying: 669/1024 [MB] (36 MBps) Copying: 706/1024 [MB] (36 MBps) Copying: 744/1024 [MB] (37 MBps) Copying: 781/1024 [MB] (37 MBps) Copying: 818/1024 [MB] (37 MBps) Copying: 852/1024 [MB] (34 MBps) Copying: 886/1024 [MB] (33 MBps) Copying: 921/1024 [MB] (34 MBps) Copying: 956/1024 [MB] (35 MBps) Copying: 989/1024 [MB] (33 MBps) Copying: 1022/1024 [MB] (33 MBps) Copying: 1024/1024 [MB] (average 34 MBps)[2024-07-10 12:38:02.206749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:52.770 [2024-07-10 12:38:02.206830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:35:52.770 [2024-07-10 12:38:02.206849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:35:52.770 [2024-07-10 12:38:02.206860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:52.770 [2024-07-10 12:38:02.206897] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:35:52.770 [2024-07-10 12:38:02.210682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:52.770 [2024-07-10 12:38:02.210724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:35:52.770 [2024-07-10 12:38:02.210745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.769 ms 00:35:52.770 [2024-07-10 12:38:02.210756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:52.770 [2024-07-10 12:38:02.211075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:52.770 [2024-07-10 12:38:02.211093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:35:52.770 [2024-07-10 12:38:02.211106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:35:52.770 [2024-07-10 12:38:02.211117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:52.770 [2024-07-10 12:38:02.222492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:52.770 [2024-07-10 12:38:02.222547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:35:52.770 [2024-07-10 12:38:02.222565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.364 ms 00:35:52.770 [2024-07-10 12:38:02.222577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:52.770 [2024-07-10 12:38:02.227980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:52.770 [2024-07-10 12:38:02.228018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:35:52.770 [2024-07-10 12:38:02.228032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.370 ms 00:35:52.770 [2024-07-10 12:38:02.228043] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:53.029 [2024-07-10 12:38:02.269313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:53.029 [2024-07-10 12:38:02.269370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:35:53.029 [2024-07-10 12:38:02.269389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.238 ms 00:35:53.029 [2024-07-10 12:38:02.269399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:53.029 [2024-07-10 12:38:02.291743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:53.029 [2024-07-10 12:38:02.291821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:35:53.029 [2024-07-10 12:38:02.291840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.332 ms 00:35:53.029 [2024-07-10 12:38:02.291850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:53.029 [2024-07-10 12:38:02.296301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:53.029 [2024-07-10 12:38:02.296344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:35:53.029 [2024-07-10 12:38:02.296360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.408 ms 00:35:53.029 [2024-07-10 12:38:02.296371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:53.029 [2024-07-10 12:38:02.335179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:53.029 [2024-07-10 12:38:02.335229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:35:53.029 [2024-07-10 12:38:02.335246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.851 ms 00:35:53.029 [2024-07-10 12:38:02.335256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:53.029 [2024-07-10 12:38:02.372295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:53.029 [2024-07-10 12:38:02.372339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:35:53.029 [2024-07-10 12:38:02.372356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.053 ms 00:35:53.029 [2024-07-10 12:38:02.372367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:53.029 [2024-07-10 12:38:02.408857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:53.029 [2024-07-10 12:38:02.408904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:35:53.029 [2024-07-10 12:38:02.408921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.506 ms 00:35:53.029 [2024-07-10 12:38:02.408949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:53.029 [2024-07-10 12:38:02.445741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:53.029 [2024-07-10 12:38:02.445787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:35:53.029 [2024-07-10 12:38:02.445804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.762 ms 00:35:53.029 [2024-07-10 12:38:02.445815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:53.029 [2024-07-10 12:38:02.445857] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:35:53.029 [2024-07-10 12:38:02.445875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:35:53.029 [2024-07-10 12:38:02.445890] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3840 / 261120 wr_cnt: 1 state: open 00:35:53.029 [2024-07-10 12:38:02.445902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:35:53.029 [2024-07-10 12:38:02.445915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:35:53.029 [2024-07-10 12:38:02.445927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:35:53.029 [2024-07-10 12:38:02.445939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:35:53.029 [2024-07-10 12:38:02.445951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:35:53.029 [2024-07-10 12:38:02.445962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:35:53.029 [2024-07-10 12:38:02.445973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:35:53.029 [2024-07-10 12:38:02.445984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:35:53.029 [2024-07-10 12:38:02.445995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:35:53.029 [2024-07-10 12:38:02.446007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:35:53.029 [2024-07-10 12:38:02.446018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:35:53.029 [2024-07-10 12:38:02.446029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:35:53.029 [2024-07-10 12:38:02.446041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:35:53.029 [2024-07-10 12:38:02.446052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:35:53.029 [2024-07-10 12:38:02.446063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:35:53.029 [2024-07-10 12:38:02.446075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:35:53.029 [2024-07-10 12:38:02.446086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:35:53.029 [2024-07-10 12:38:02.446098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:35:53.029 [2024-07-10 12:38:02.446109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:35:53.029 [2024-07-10 12:38:02.446121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:35:53.029 [2024-07-10 12:38:02.446132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:35:53.029 [2024-07-10 12:38:02.446143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:35:53.029 [2024-07-10 12:38:02.446154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:35:53.029 [2024-07-10 12:38:02.446165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:35:53.029 [2024-07-10 
12:38:02.446176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:35:53.030 [2024-07-10 12:38:02.446187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:35:53.030 [2024-07-10 12:38:02.446198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:35:53.030 [2024-07-10 12:38:02.446210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:35:53.030 [2024-07-10 12:38:02.446221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:35:53.030 [2024-07-10 12:38:02.446232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:35:53.030 [2024-07-10 12:38:02.446244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:35:53.030 [2024-07-10 12:38:02.446255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:35:53.030 [2024-07-10 12:38:02.446266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:35:53.030 [2024-07-10 12:38:02.446277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:35:53.030 [2024-07-10 12:38:02.446289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:35:53.030 [2024-07-10 12:38:02.446301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:35:53.030 [2024-07-10 12:38:02.446312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:35:53.030 [2024-07-10 12:38:02.446324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:35:53.030 [2024-07-10 12:38:02.446335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:35:53.030 [2024-07-10 12:38:02.446346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:35:53.030 [2024-07-10 12:38:02.446357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:35:53.030 [2024-07-10 12:38:02.446367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:35:53.030 [2024-07-10 12:38:02.446378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:35:53.030 [2024-07-10 12:38:02.446389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:35:53.030 [2024-07-10 12:38:02.446400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:35:53.030 [2024-07-10 12:38:02.446411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:35:53.030 [2024-07-10 12:38:02.446421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:35:53.030 [2024-07-10 12:38:02.446433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:35:53.030 [2024-07-10 12:38:02.446444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 
00:35:53.030 [2024-07-10 12:38:02.446455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:35:53.030 [2024-07-10 12:38:02.446466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:35:53.030 [2024-07-10 12:38:02.446477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:35:53.030 [2024-07-10 12:38:02.446488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:35:53.030 [2024-07-10 12:38:02.446499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:35:53.030 [2024-07-10 12:38:02.446512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:35:53.030 [2024-07-10 12:38:02.446523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:35:53.030 [2024-07-10 12:38:02.446535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:35:53.030 [2024-07-10 12:38:02.446545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:35:53.030 [2024-07-10 12:38:02.446557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:35:53.030 [2024-07-10 12:38:02.446570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:35:53.030 [2024-07-10 12:38:02.446582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:35:53.030 [2024-07-10 12:38:02.446593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:35:53.030 [2024-07-10 12:38:02.446605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:35:53.030 [2024-07-10 12:38:02.446616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:35:53.030 [2024-07-10 12:38:02.446627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:35:53.030 [2024-07-10 12:38:02.446638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:35:53.030 [2024-07-10 12:38:02.446649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:35:53.030 [2024-07-10 12:38:02.446660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:35:53.030 [2024-07-10 12:38:02.446672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:35:53.030 [2024-07-10 12:38:02.446683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:35:53.030 [2024-07-10 12:38:02.446694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:35:53.030 [2024-07-10 12:38:02.446705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:35:53.030 [2024-07-10 12:38:02.446716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:35:53.030 [2024-07-10 12:38:02.446727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 
wr_cnt: 0 state: free 00:35:53.030 [2024-07-10 12:38:02.446748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:35:53.030 [2024-07-10 12:38:02.446759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:35:53.030 [2024-07-10 12:38:02.446770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:35:53.030 [2024-07-10 12:38:02.446782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:35:53.030 [2024-07-10 12:38:02.446793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:35:53.030 [2024-07-10 12:38:02.446804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:35:53.030 [2024-07-10 12:38:02.446815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:35:53.030 [2024-07-10 12:38:02.446827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:35:53.030 [2024-07-10 12:38:02.446838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:35:53.030 [2024-07-10 12:38:02.446849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:35:53.030 [2024-07-10 12:38:02.446861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:35:53.030 [2024-07-10 12:38:02.446872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:35:53.030 [2024-07-10 12:38:02.446883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:35:53.030 [2024-07-10 12:38:02.446894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:35:53.030 [2024-07-10 12:38:02.446905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:35:53.030 [2024-07-10 12:38:02.446916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:35:53.030 [2024-07-10 12:38:02.446927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:35:53.030 [2024-07-10 12:38:02.446939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:35:53.030 [2024-07-10 12:38:02.446950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:35:53.030 [2024-07-10 12:38:02.446962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:35:53.030 [2024-07-10 12:38:02.446973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:35:53.030 [2024-07-10 12:38:02.446984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:35:53.030 [2024-07-10 12:38:02.446994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:35:53.030 [2024-07-10 12:38:02.447005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:35:53.030 [2024-07-10 12:38:02.447024] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] 00:35:53.030 [2024-07-10 12:38:02.447035] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: af10987c-0e85-4376-ab8d-dcd11810374e 00:35:53.030 [2024-07-10 12:38:02.447047] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264960 00:35:53.030 [2024-07-10 12:38:02.447058] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 163776 00:35:53.030 [2024-07-10 12:38:02.447068] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 161792 00:35:53.030 [2024-07-10 12:38:02.447086] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0123 00:35:53.030 [2024-07-10 12:38:02.447096] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:35:53.030 [2024-07-10 12:38:02.447111] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:35:53.030 [2024-07-10 12:38:02.447122] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:35:53.030 [2024-07-10 12:38:02.447131] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:35:53.030 [2024-07-10 12:38:02.447140] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:35:53.031 [2024-07-10 12:38:02.447151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:53.031 [2024-07-10 12:38:02.447161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:35:53.031 [2024-07-10 12:38:02.447172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.297 ms 00:35:53.031 [2024-07-10 12:38:02.447182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:53.031 [2024-07-10 12:38:02.468151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:53.031 [2024-07-10 12:38:02.468194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:35:53.031 [2024-07-10 12:38:02.468209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.962 ms 00:35:53.031 [2024-07-10 12:38:02.468238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:53.031 [2024-07-10 12:38:02.468771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:53.031 [2024-07-10 12:38:02.468794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:35:53.031 [2024-07-10 12:38:02.468806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.508 ms 00:35:53.031 [2024-07-10 12:38:02.468817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:53.297 [2024-07-10 12:38:02.513433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:53.297 [2024-07-10 12:38:02.513477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:35:53.297 [2024-07-10 12:38:02.513498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:53.297 [2024-07-10 12:38:02.513509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:53.297 [2024-07-10 12:38:02.513574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:53.297 [2024-07-10 12:38:02.513585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:53.297 [2024-07-10 12:38:02.513596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:53.297 [2024-07-10 12:38:02.513606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:53.297 [2024-07-10 12:38:02.513684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:35:53.297 [2024-07-10 12:38:02.513698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:35:53.297 [2024-07-10 12:38:02.513710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:53.297 [2024-07-10 12:38:02.513726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:53.297 [2024-07-10 12:38:02.513762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:53.297 [2024-07-10 12:38:02.513773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:35:53.297 [2024-07-10 12:38:02.513784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:53.297 [2024-07-10 12:38:02.513795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:53.297 [2024-07-10 12:38:02.640881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:53.297 [2024-07-10 12:38:02.640955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:35:53.297 [2024-07-10 12:38:02.640982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:53.297 [2024-07-10 12:38:02.640994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:53.297 [2024-07-10 12:38:02.744984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:53.297 [2024-07-10 12:38:02.745062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:53.297 [2024-07-10 12:38:02.745080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:53.297 [2024-07-10 12:38:02.745092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:53.297 [2024-07-10 12:38:02.745172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:53.297 [2024-07-10 12:38:02.745184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:53.297 [2024-07-10 12:38:02.745196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:53.297 [2024-07-10 12:38:02.745207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:53.297 [2024-07-10 12:38:02.745257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:53.297 [2024-07-10 12:38:02.745269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:53.297 [2024-07-10 12:38:02.745280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:53.297 [2024-07-10 12:38:02.745291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:53.297 [2024-07-10 12:38:02.745414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:53.297 [2024-07-10 12:38:02.745428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:35:53.297 [2024-07-10 12:38:02.745439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:53.297 [2024-07-10 12:38:02.745450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:53.297 [2024-07-10 12:38:02.745497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:53.297 [2024-07-10 12:38:02.745511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:35:53.297 [2024-07-10 12:38:02.745522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:53.297 [2024-07-10 12:38:02.745532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:53.297 [2024-07-10 
12:38:02.745574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:53.297 [2024-07-10 12:38:02.745586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:35:53.297 [2024-07-10 12:38:02.745597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:53.297 [2024-07-10 12:38:02.745608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:53.297 [2024-07-10 12:38:02.745660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:53.297 [2024-07-10 12:38:02.745672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:35:53.297 [2024-07-10 12:38:02.745683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:53.297 [2024-07-10 12:38:02.745694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:53.297 [2024-07-10 12:38:02.745863] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 539.953 ms, result 0 00:35:54.738 00:35:54.738 00:35:54.738 12:38:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:35:56.640 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:35:56.640 12:38:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:35:56.640 [2024-07-10 12:38:05.860899] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:35:56.640 [2024-07-10 12:38:05.861033] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85523 ] 00:35:56.640 [2024-07-10 12:38:06.033369] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:56.898 [2024-07-10 12:38:06.281743] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:57.466 [2024-07-10 12:38:06.689070] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:35:57.466 [2024-07-10 12:38:06.689147] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:35:57.466 [2024-07-10 12:38:06.852394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:57.466 [2024-07-10 12:38:06.852461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:35:57.466 [2024-07-10 12:38:06.852478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:35:57.466 [2024-07-10 12:38:06.852489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:57.466 [2024-07-10 12:38:06.852554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:57.466 [2024-07-10 12:38:06.852568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:35:57.466 [2024-07-10 12:38:06.852580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:35:57.466 [2024-07-10 12:38:06.852594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:57.466 [2024-07-10 12:38:06.852615] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:35:57.466 [2024-07-10 12:38:06.853665] 
mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:35:57.466 [2024-07-10 12:38:06.853692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:57.466 [2024-07-10 12:38:06.853706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:35:57.466 [2024-07-10 12:38:06.853718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.082 ms 00:35:57.466 [2024-07-10 12:38:06.853742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:57.466 [2024-07-10 12:38:06.855196] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:35:57.466 [2024-07-10 12:38:06.875159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:57.466 [2024-07-10 12:38:06.875218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:35:57.466 [2024-07-10 12:38:06.875234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.996 ms 00:35:57.466 [2024-07-10 12:38:06.875245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:57.466 [2024-07-10 12:38:06.875313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:57.466 [2024-07-10 12:38:06.875326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:35:57.466 [2024-07-10 12:38:06.875341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:35:57.466 [2024-07-10 12:38:06.875351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:57.466 [2024-07-10 12:38:06.882170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:57.466 [2024-07-10 12:38:06.882200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:35:57.466 [2024-07-10 12:38:06.882212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.758 ms 00:35:57.466 [2024-07-10 12:38:06.882223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:57.466 [2024-07-10 12:38:06.882311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:57.466 [2024-07-10 12:38:06.882328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:57.466 [2024-07-10 12:38:06.882339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:35:57.466 [2024-07-10 12:38:06.882358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:57.466 [2024-07-10 12:38:06.882403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:57.466 [2024-07-10 12:38:06.882416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:35:57.466 [2024-07-10 12:38:06.882427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:35:57.466 [2024-07-10 12:38:06.882437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:57.466 [2024-07-10 12:38:06.882462] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:35:57.466 [2024-07-10 12:38:06.888214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:57.466 [2024-07-10 12:38:06.888246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:57.466 [2024-07-10 12:38:06.888258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.767 ms 00:35:57.466 [2024-07-10 12:38:06.888269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:57.466 
[2024-07-10 12:38:06.888306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:57.466 [2024-07-10 12:38:06.888317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:35:57.467 [2024-07-10 12:38:06.888329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:35:57.467 [2024-07-10 12:38:06.888338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:57.467 [2024-07-10 12:38:06.888391] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:35:57.467 [2024-07-10 12:38:06.888416] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:35:57.467 [2024-07-10 12:38:06.888452] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:35:57.467 [2024-07-10 12:38:06.888474] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:35:57.467 [2024-07-10 12:38:06.888560] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:35:57.467 [2024-07-10 12:38:06.888574] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:35:57.467 [2024-07-10 12:38:06.888587] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:35:57.467 [2024-07-10 12:38:06.888600] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:35:57.467 [2024-07-10 12:38:06.888612] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:35:57.467 [2024-07-10 12:38:06.888623] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:35:57.467 [2024-07-10 12:38:06.888633] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:35:57.467 [2024-07-10 12:38:06.888643] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:35:57.467 [2024-07-10 12:38:06.888654] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:35:57.467 [2024-07-10 12:38:06.888664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:57.467 [2024-07-10 12:38:06.888678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:35:57.467 [2024-07-10 12:38:06.888689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:35:57.467 [2024-07-10 12:38:06.888698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:57.467 [2024-07-10 12:38:06.888778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:57.467 [2024-07-10 12:38:06.888790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:35:57.467 [2024-07-10 12:38:06.888800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:35:57.467 [2024-07-10 12:38:06.888810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:57.467 [2024-07-10 12:38:06.888904] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:35:57.467 [2024-07-10 12:38:06.888917] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:35:57.467 [2024-07-10 12:38:06.888931] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:35:57.467 [2024-07-10 12:38:06.888941] ftl_layout.c: 121:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:57.467 [2024-07-10 12:38:06.888951] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:35:57.467 [2024-07-10 12:38:06.888961] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:35:57.467 [2024-07-10 12:38:06.888970] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:35:57.467 [2024-07-10 12:38:06.888980] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:35:57.467 [2024-07-10 12:38:06.888989] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:35:57.467 [2024-07-10 12:38:06.888999] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:35:57.467 [2024-07-10 12:38:06.889009] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:35:57.467 [2024-07-10 12:38:06.889019] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:35:57.467 [2024-07-10 12:38:06.889028] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:35:57.467 [2024-07-10 12:38:06.889037] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:35:57.467 [2024-07-10 12:38:06.889046] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:35:57.467 [2024-07-10 12:38:06.889055] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:57.467 [2024-07-10 12:38:06.889065] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:35:57.467 [2024-07-10 12:38:06.889074] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:35:57.467 [2024-07-10 12:38:06.889082] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:57.467 [2024-07-10 12:38:06.889092] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:35:57.467 [2024-07-10 12:38:06.889113] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:35:57.467 [2024-07-10 12:38:06.889122] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:57.467 [2024-07-10 12:38:06.889131] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:35:57.467 [2024-07-10 12:38:06.889140] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:35:57.467 [2024-07-10 12:38:06.889149] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:57.467 [2024-07-10 12:38:06.889158] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:35:57.467 [2024-07-10 12:38:06.889168] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:35:57.467 [2024-07-10 12:38:06.889177] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:57.467 [2024-07-10 12:38:06.889186] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:35:57.467 [2024-07-10 12:38:06.889196] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:35:57.467 [2024-07-10 12:38:06.889205] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:57.467 [2024-07-10 12:38:06.889214] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:35:57.467 [2024-07-10 12:38:06.889223] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:35:57.467 [2024-07-10 12:38:06.889232] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:35:57.467 [2024-07-10 12:38:06.889241] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:35:57.467 [2024-07-10 12:38:06.889251] 
ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:35:57.467 [2024-07-10 12:38:06.889260] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:35:57.467 [2024-07-10 12:38:06.889269] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:35:57.467 [2024-07-10 12:38:06.889278] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:35:57.467 [2024-07-10 12:38:06.889287] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:57.467 [2024-07-10 12:38:06.889296] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:35:57.467 [2024-07-10 12:38:06.889307] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:35:57.467 [2024-07-10 12:38:06.889316] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:57.467 [2024-07-10 12:38:06.889326] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:35:57.467 [2024-07-10 12:38:06.889337] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:35:57.467 [2024-07-10 12:38:06.889347] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:35:57.467 [2024-07-10 12:38:06.889357] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:57.467 [2024-07-10 12:38:06.889368] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:35:57.467 [2024-07-10 12:38:06.889377] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:35:57.467 [2024-07-10 12:38:06.889387] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:35:57.467 [2024-07-10 12:38:06.889396] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:35:57.467 [2024-07-10 12:38:06.889406] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:35:57.467 [2024-07-10 12:38:06.889416] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:35:57.467 [2024-07-10 12:38:06.889426] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:35:57.467 [2024-07-10 12:38:06.889437] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:57.467 [2024-07-10 12:38:06.889449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:35:57.467 [2024-07-10 12:38:06.889460] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:35:57.467 [2024-07-10 12:38:06.889470] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:35:57.467 [2024-07-10 12:38:06.889481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:35:57.467 [2024-07-10 12:38:06.889491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:35:57.467 [2024-07-10 12:38:06.889502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:35:57.467 [2024-07-10 12:38:06.889512] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:35:57.467 [2024-07-10 
12:38:06.889522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:35:57.467 [2024-07-10 12:38:06.889532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:35:57.467 [2024-07-10 12:38:06.889543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:35:57.467 [2024-07-10 12:38:06.889553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:35:57.467 [2024-07-10 12:38:06.889564] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:35:57.467 [2024-07-10 12:38:06.889574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:35:57.467 [2024-07-10 12:38:06.889585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:35:57.467 [2024-07-10 12:38:06.889595] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:35:57.467 [2024-07-10 12:38:06.889606] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:57.467 [2024-07-10 12:38:06.889617] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:35:57.467 [2024-07-10 12:38:06.889628] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:35:57.467 [2024-07-10 12:38:06.889639] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:35:57.467 [2024-07-10 12:38:06.889650] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:35:57.467 [2024-07-10 12:38:06.889661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:57.467 [2024-07-10 12:38:06.889677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:35:57.467 [2024-07-10 12:38:06.889687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.810 ms 00:35:57.467 [2024-07-10 12:38:06.889696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:57.727 [2024-07-10 12:38:06.945573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:57.727 [2024-07-10 12:38:06.945635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:57.727 [2024-07-10 12:38:06.945651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.907 ms 00:35:57.727 [2024-07-10 12:38:06.945663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:57.727 [2024-07-10 12:38:06.945783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:57.727 [2024-07-10 12:38:06.945796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:35:57.727 [2024-07-10 12:38:06.945807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:35:57.727 [2024-07-10 12:38:06.945816] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:57.727 [2024-07-10 12:38:06.998083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:57.727 [2024-07-10 12:38:06.998143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:35:57.727 [2024-07-10 12:38:06.998160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.258 ms 00:35:57.727 [2024-07-10 12:38:06.998171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:57.727 [2024-07-10 12:38:06.998234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:57.727 [2024-07-10 12:38:06.998246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:35:57.727 [2024-07-10 12:38:06.998258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:35:57.727 [2024-07-10 12:38:06.998269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:57.727 [2024-07-10 12:38:06.998782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:57.727 [2024-07-10 12:38:06.998798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:35:57.727 [2024-07-10 12:38:06.998810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.443 ms 00:35:57.727 [2024-07-10 12:38:06.998820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:57.727 [2024-07-10 12:38:06.998945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:57.727 [2024-07-10 12:38:06.998960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:57.727 [2024-07-10 12:38:06.998971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:35:57.727 [2024-07-10 12:38:06.998982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:57.727 [2024-07-10 12:38:07.018784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:57.727 [2024-07-10 12:38:07.018837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:35:57.727 [2024-07-10 12:38:07.018854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.812 ms 00:35:57.727 [2024-07-10 12:38:07.018866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:57.727 [2024-07-10 12:38:07.038397] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:35:57.727 [2024-07-10 12:38:07.038447] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:35:57.727 [2024-07-10 12:38:07.038464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:57.727 [2024-07-10 12:38:07.038475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:35:57.727 [2024-07-10 12:38:07.038488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.485 ms 00:35:57.727 [2024-07-10 12:38:07.038499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:57.727 [2024-07-10 12:38:07.067646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:57.727 [2024-07-10 12:38:07.067693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:35:57.727 [2024-07-10 12:38:07.067709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.145 ms 00:35:57.727 [2024-07-10 12:38:07.067727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:57.727 [2024-07-10 
12:38:07.085935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:57.727 [2024-07-10 12:38:07.085976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:35:57.727 [2024-07-10 12:38:07.085991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.168 ms 00:35:57.727 [2024-07-10 12:38:07.086002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:57.727 [2024-07-10 12:38:07.104049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:57.727 [2024-07-10 12:38:07.104089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:35:57.727 [2024-07-10 12:38:07.104110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.035 ms 00:35:57.727 [2024-07-10 12:38:07.104120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:57.727 [2024-07-10 12:38:07.105025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:57.727 [2024-07-10 12:38:07.105050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:35:57.727 [2024-07-10 12:38:07.105064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.784 ms 00:35:57.727 [2024-07-10 12:38:07.105076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:57.727 [2024-07-10 12:38:07.192258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:57.728 [2024-07-10 12:38:07.192322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:35:57.728 [2024-07-10 12:38:07.192341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.298 ms 00:35:57.728 [2024-07-10 12:38:07.192353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:57.728 [2024-07-10 12:38:07.205259] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:35:57.997 [2024-07-10 12:38:07.208561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:57.997 [2024-07-10 12:38:07.208596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:35:57.997 [2024-07-10 12:38:07.208613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.162 ms 00:35:57.997 [2024-07-10 12:38:07.208624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:57.997 [2024-07-10 12:38:07.208758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:57.997 [2024-07-10 12:38:07.208773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:35:57.997 [2024-07-10 12:38:07.208785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:35:57.997 [2024-07-10 12:38:07.208795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:57.997 [2024-07-10 12:38:07.209680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:57.997 [2024-07-10 12:38:07.209704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:35:57.997 [2024-07-10 12:38:07.209717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.843 ms 00:35:57.997 [2024-07-10 12:38:07.209742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:57.997 [2024-07-10 12:38:07.209769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:57.997 [2024-07-10 12:38:07.209781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:35:57.997 [2024-07-10 12:38:07.209792] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:35:57.997 [2024-07-10 12:38:07.209802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:57.997 [2024-07-10 12:38:07.209841] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:35:57.997 [2024-07-10 12:38:07.209853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:57.997 [2024-07-10 12:38:07.209863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:35:57.997 [2024-07-10 12:38:07.209876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:35:57.997 [2024-07-10 12:38:07.209886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:57.997 [2024-07-10 12:38:07.246148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:57.997 [2024-07-10 12:38:07.246199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:35:57.997 [2024-07-10 12:38:07.246216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.299 ms 00:35:57.997 [2024-07-10 12:38:07.246227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:57.997 [2024-07-10 12:38:07.246310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:57.997 [2024-07-10 12:38:07.246331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:35:57.997 [2024-07-10 12:38:07.246343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:35:57.997 [2024-07-10 12:38:07.246354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:57.998 [2024-07-10 12:38:07.247558] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 395.338 ms, result 0 00:36:33.164  Copying: 32/1024 [MB] (32 MBps) Copying: 63/1024 [MB] (30 MBps) Copying: 92/1024 [MB] (29 MBps) Copying: 122/1024 [MB] (29 MBps) Copying: 153/1024 [MB] (30 MBps) Copying: 186/1024 [MB] (33 MBps) Copying: 218/1024 [MB] (32 MBps) Copying: 250/1024 [MB] (31 MBps) Copying: 281/1024 [MB] (31 MBps) Copying: 311/1024 [MB] (30 MBps) Copying: 342/1024 [MB] (30 MBps) Copying: 370/1024 [MB] (28 MBps) Copying: 400/1024 [MB] (29 MBps) Copying: 429/1024 [MB] (29 MBps) Copying: 459/1024 [MB] (29 MBps) Copying: 488/1024 [MB] (29 MBps) Copying: 517/1024 [MB] (28 MBps) Copying: 546/1024 [MB] (29 MBps) Copying: 575/1024 [MB] (29 MBps) Copying: 605/1024 [MB] (29 MBps) Copying: 634/1024 [MB] (28 MBps) Copying: 661/1024 [MB] (26 MBps) Copying: 689/1024 [MB] (28 MBps) Copying: 715/1024 [MB] (25 MBps) Copying: 741/1024 [MB] (25 MBps) Copying: 768/1024 [MB] (27 MBps) Copying: 795/1024 [MB] (27 MBps) Copying: 822/1024 [MB] (27 MBps) Copying: 850/1024 [MB] (27 MBps) Copying: 879/1024 [MB] (28 MBps) Copying: 907/1024 [MB] (27 MBps) Copying: 934/1024 [MB] (27 MBps) Copying: 963/1024 [MB] (28 MBps) Copying: 991/1024 [MB] (28 MBps) Copying: 1021/1024 [MB] (29 MBps) Copying: 1024/1024 [MB] (average 29 MBps)[2024-07-10 12:38:42.507716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:33.164 [2024-07-10 12:38:42.507814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:36:33.164 [2024-07-10 12:38:42.507832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:36:33.164 [2024-07-10 12:38:42.507843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:33.164 [2024-07-10 12:38:42.507867] 
mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:36:33.164 [2024-07-10 12:38:42.511958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:33.164 [2024-07-10 12:38:42.512004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:36:33.164 [2024-07-10 12:38:42.512020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.078 ms 00:36:33.164 [2024-07-10 12:38:42.512032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:33.164 [2024-07-10 12:38:42.512260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:33.164 [2024-07-10 12:38:42.512274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:36:33.164 [2024-07-10 12:38:42.512286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.185 ms 00:36:33.164 [2024-07-10 12:38:42.512297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:33.164 [2024-07-10 12:38:42.514972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:33.164 [2024-07-10 12:38:42.514999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:36:33.164 [2024-07-10 12:38:42.515011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.665 ms 00:36:33.164 [2024-07-10 12:38:42.515022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:33.164 [2024-07-10 12:38:42.520188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:33.164 [2024-07-10 12:38:42.520224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:36:33.164 [2024-07-10 12:38:42.520243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.155 ms 00:36:33.164 [2024-07-10 12:38:42.520253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:33.164 [2024-07-10 12:38:42.558530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:33.164 [2024-07-10 12:38:42.558592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:36:33.164 [2024-07-10 12:38:42.558608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.275 ms 00:36:33.164 [2024-07-10 12:38:42.558619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:33.164 [2024-07-10 12:38:42.581020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:33.164 [2024-07-10 12:38:42.581069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:36:33.164 [2024-07-10 12:38:42.581085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.394 ms 00:36:33.164 [2024-07-10 12:38:42.581097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:33.164 [2024-07-10 12:38:42.585266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:33.164 [2024-07-10 12:38:42.585306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:36:33.164 [2024-07-10 12:38:42.585319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.128 ms 00:36:33.164 [2024-07-10 12:38:42.585337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:33.164 [2024-07-10 12:38:42.627666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:33.164 [2024-07-10 12:38:42.627756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:36:33.164 [2024-07-10 12:38:42.627775] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.374 ms 00:36:33.164 [2024-07-10 12:38:42.627786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:33.423 [2024-07-10 12:38:42.671627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:33.423 [2024-07-10 12:38:42.671705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:36:33.423 [2024-07-10 12:38:42.671724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.816 ms 00:36:33.423 [2024-07-10 12:38:42.671751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:33.423 [2024-07-10 12:38:42.714932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:33.423 [2024-07-10 12:38:42.715015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:36:33.423 [2024-07-10 12:38:42.715053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.156 ms 00:36:33.424 [2024-07-10 12:38:42.715063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:33.424 [2024-07-10 12:38:42.756623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:33.424 [2024-07-10 12:38:42.756699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:36:33.424 [2024-07-10 12:38:42.756717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.475 ms 00:36:33.424 [2024-07-10 12:38:42.756734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:33.424 [2024-07-10 12:38:42.756815] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:36:33.424 [2024-07-10 12:38:42.756835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:36:33.424 [2024-07-10 12:38:42.756849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3840 / 261120 wr_cnt: 1 state: open 00:36:33.424 [2024-07-10 12:38:42.756862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.756873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.756884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.756895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.756908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.756919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.756931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.756942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.756954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.756965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.756975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.756986] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.756997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.757008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.757019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.757030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.757040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.757051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.757062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.757076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.757087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.757098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.757108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.757119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.757129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.757140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.757152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.757165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.757177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.757188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.757201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.757212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.757223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.757234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.757245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.757255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 
12:38:42.757266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.757277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.757288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.757299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.757310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.757320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.757332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.757343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.757354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.757365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.757375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.757386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.757398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.757408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.757419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.757429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.757440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.757451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.757462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.757472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.757483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.757493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.757504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.757515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.757527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 
00:36:33.424 [2024-07-10 12:38:42.757537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.757548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.757558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.757569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.757579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.757590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.757601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.757612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.757623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.757633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.757644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.757654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.757664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.757674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.757685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.757696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.757707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.757717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.757728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.757747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.757758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.757769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.757780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.757790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.757802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 
wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.757814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.757825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:36:33.424 [2024-07-10 12:38:42.757837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:36:33.425 [2024-07-10 12:38:42.757848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:36:33.425 [2024-07-10 12:38:42.757859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:36:33.425 [2024-07-10 12:38:42.757870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:36:33.425 [2024-07-10 12:38:42.757882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:36:33.425 [2024-07-10 12:38:42.757893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:36:33.425 [2024-07-10 12:38:42.757904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:36:33.425 [2024-07-10 12:38:42.757915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:36:33.425 [2024-07-10 12:38:42.757926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:36:33.425 [2024-07-10 12:38:42.757937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:36:33.425 [2024-07-10 12:38:42.757955] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:36:33.425 [2024-07-10 12:38:42.757965] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: af10987c-0e85-4376-ab8d-dcd11810374e 00:36:33.425 [2024-07-10 12:38:42.757977] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264960 00:36:33.425 [2024-07-10 12:38:42.757987] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:36:33.425 [2024-07-10 12:38:42.758008] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:36:33.425 [2024-07-10 12:38:42.758018] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:36:33.425 [2024-07-10 12:38:42.758028] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:36:33.425 [2024-07-10 12:38:42.758038] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:36:33.425 [2024-07-10 12:38:42.758048] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:36:33.425 [2024-07-10 12:38:42.758057] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:36:33.425 [2024-07-10 12:38:42.758066] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:36:33.425 [2024-07-10 12:38:42.758076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:33.425 [2024-07-10 12:38:42.758087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:36:33.425 [2024-07-10 12:38:42.758097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.266 ms 00:36:33.425 [2024-07-10 12:38:42.758107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:33.425 [2024-07-10 12:38:42.779531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:36:33.425 [2024-07-10 12:38:42.779601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:36:33.425 [2024-07-10 12:38:42.779629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.393 ms 00:36:33.425 [2024-07-10 12:38:42.779641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:33.425 [2024-07-10 12:38:42.780213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:33.425 [2024-07-10 12:38:42.780244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:36:33.425 [2024-07-10 12:38:42.780263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.528 ms 00:36:33.425 [2024-07-10 12:38:42.780279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:33.425 [2024-07-10 12:38:42.824801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:33.425 [2024-07-10 12:38:42.824877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:36:33.425 [2024-07-10 12:38:42.824895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:33.425 [2024-07-10 12:38:42.824906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:33.425 [2024-07-10 12:38:42.824989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:33.425 [2024-07-10 12:38:42.825000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:36:33.425 [2024-07-10 12:38:42.825011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:33.425 [2024-07-10 12:38:42.825022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:33.425 [2024-07-10 12:38:42.825129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:33.425 [2024-07-10 12:38:42.825143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:36:33.425 [2024-07-10 12:38:42.825153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:33.425 [2024-07-10 12:38:42.825163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:33.425 [2024-07-10 12:38:42.825182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:33.425 [2024-07-10 12:38:42.825192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:36:33.425 [2024-07-10 12:38:42.825203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:33.425 [2024-07-10 12:38:42.825213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:33.693 [2024-07-10 12:38:42.946044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:33.693 [2024-07-10 12:38:42.946119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:36:33.693 [2024-07-10 12:38:42.946136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:33.693 [2024-07-10 12:38:42.946147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:33.693 [2024-07-10 12:38:43.050624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:33.693 [2024-07-10 12:38:43.050690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:36:33.693 [2024-07-10 12:38:43.050706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:33.693 [2024-07-10 12:38:43.050717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:33.693 [2024-07-10 
12:38:43.050809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:33.693 [2024-07-10 12:38:43.050829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:36:33.693 [2024-07-10 12:38:43.050840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:33.693 [2024-07-10 12:38:43.050850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:33.693 [2024-07-10 12:38:43.050891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:33.693 [2024-07-10 12:38:43.050903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:36:33.693 [2024-07-10 12:38:43.050913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:33.693 [2024-07-10 12:38:43.050923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:33.693 [2024-07-10 12:38:43.051034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:33.693 [2024-07-10 12:38:43.051061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:36:33.693 [2024-07-10 12:38:43.051072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:33.693 [2024-07-10 12:38:43.051082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:33.693 [2024-07-10 12:38:43.051121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:33.693 [2024-07-10 12:38:43.051133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:36:33.693 [2024-07-10 12:38:43.051144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:33.693 [2024-07-10 12:38:43.051154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:33.693 [2024-07-10 12:38:43.051194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:33.693 [2024-07-10 12:38:43.051205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:36:33.693 [2024-07-10 12:38:43.051220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:33.693 [2024-07-10 12:38:43.051230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:33.693 [2024-07-10 12:38:43.051282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:33.693 [2024-07-10 12:38:43.051294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:36:33.693 [2024-07-10 12:38:43.051304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:33.693 [2024-07-10 12:38:43.051315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:33.693 [2024-07-10 12:38:43.051442] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 544.576 ms, result 0 00:36:35.067 00:36:35.067 00:36:35.067 12:38:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:36:36.982 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:36:36.983 12:38:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:36:36.983 12:38:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:36:36.983 12:38:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:36:36.983 12:38:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:36:36.983 12:38:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:36:36.983 12:38:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:36:36.983 12:38:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:36:36.983 12:38:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 83822 00:36:36.983 12:38:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@948 -- # '[' -z 83822 ']' 00:36:36.983 12:38:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@952 -- # kill -0 83822 00:36:36.983 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (83822) - No such process 00:36:36.983 Process with pid 83822 is not found 00:36:36.983 12:38:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@975 -- # echo 'Process with pid 83822 is not found' 00:36:36.983 12:38:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:36:37.241 Remove shared memory files 00:36:37.241 12:38:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:36:37.241 12:38:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:36:37.241 12:38:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:36:37.241 12:38:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:36:37.241 12:38:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:36:37.241 12:38:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:36:37.241 12:38:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:36:37.241 ************************************ 00:36:37.241 END TEST ftl_dirty_shutdown 00:36:37.241 ************************************ 00:36:37.241 00:36:37.241 real 3m24.196s 00:36:37.241 user 3m53.603s 00:36:37.241 sys 0m36.407s 00:36:37.241 12:38:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:37.241 12:38:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:36:37.500 12:38:46 ftl -- common/autotest_common.sh@1142 -- # return 0 00:36:37.500 12:38:46 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:36:37.500 12:38:46 ftl -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:36:37.500 12:38:46 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:37.500 12:38:46 ftl -- common/autotest_common.sh@10 -- # set +x 00:36:37.500 ************************************ 00:36:37.500 START TEST ftl_upgrade_shutdown 00:36:37.500 ************************************ 00:36:37.500 12:38:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:36:37.500 * Looking for test storage... 
00:36:37.500 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:36:37.500 12:38:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:36:37.500 12:38:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:36:37.501 12:38:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:36:37.501 12:38:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:36:37.501 12:38:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:36:37.501 12:38:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:36:37.501 12:38:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:37.501 12:38:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:36:37.501 12:38:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:36:37.501 12:38:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:36:37.501 12:38:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:36:37.501 12:38:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:36:37.501 12:38:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:36:37.501 12:38:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:36:37.501 12:38:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:36:37.501 12:38:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:36:37.501 12:38:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:36:37.501 12:38:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:36:37.501 12:38:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:36:37.501 12:38:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:36:37.501 12:38:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:36:37.501 12:38:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:36:37.501 12:38:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:36:37.501 12:38:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:36:37.501 12:38:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:36:37.501 12:38:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:36:37.501 12:38:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:36:37.501 12:38:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:36:37.501 12:38:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:36:37.501 12:38:46 ftl.ftl_upgrade_shutdown 
-- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:36:37.501 12:38:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:36:37.501 12:38:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:36:37.501 12:38:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:36:37.501 12:38:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:36:37.501 12:38:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:36:37.501 12:38:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:36:37.501 12:38:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:36:37.501 12:38:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:36:37.501 12:38:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:36:37.501 12:38:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:36:37.501 12:38:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:36:37.501 12:38:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:36:37.501 12:38:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:36:37.501 12:38:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:36:37.501 12:38:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:36:37.501 12:38:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:36:37.501 12:38:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=85992 00:36:37.501 12:38:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:36:37.501 12:38:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:36:37.501 12:38:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 85992 00:36:37.501 12:38:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@829 -- # '[' -z 85992 ']' 00:36:37.501 12:38:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:37.501 12:38:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:37.501 12:38:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:37.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:37.501 12:38:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:37.501 12:38:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:36:37.760 [2024-07-10 12:38:47.027692] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:36:37.760 [2024-07-10 12:38:47.027856] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85992 ] 00:36:37.760 [2024-07-10 12:38:47.202372] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:38.019 [2024-07-10 12:38:47.449492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:38.954 12:38:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:38.954 12:38:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # return 0 00:36:38.954 12:38:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:36:38.954 12:38:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:36:38.954 12:38:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:36:38.954 12:38:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:36:38.954 12:38:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:36:38.954 12:38:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:36:38.954 12:38:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:36:38.954 12:38:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:36:38.954 12:38:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:36:38.954 12:38:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:36:38.954 12:38:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:36:38.954 12:38:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:36:38.954 12:38:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:36:38.954 12:38:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:36:38.954 12:38:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:36:38.954 12:38:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:36:38.954 12:38:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:36:38.954 12:38:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:36:38.954 12:38:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:36:38.954 12:38:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:36:38.954 12:38:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:36:39.212 12:38:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:36:39.212 12:38:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:36:39.212 12:38:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:36:39.212 12:38:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=basen1 00:36:39.212 12:38:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:36:39.212 12:38:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:36:39.212 12:38:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 
-- # local nb 00:36:39.212 12:38:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:36:39.471 12:38:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:36:39.471 { 00:36:39.471 "name": "basen1", 00:36:39.471 "aliases": [ 00:36:39.471 "453e2dda-db4c-4dd1-b4a9-70bd3e1d570d" 00:36:39.471 ], 00:36:39.471 "product_name": "NVMe disk", 00:36:39.471 "block_size": 4096, 00:36:39.471 "num_blocks": 1310720, 00:36:39.471 "uuid": "453e2dda-db4c-4dd1-b4a9-70bd3e1d570d", 00:36:39.471 "assigned_rate_limits": { 00:36:39.471 "rw_ios_per_sec": 0, 00:36:39.471 "rw_mbytes_per_sec": 0, 00:36:39.471 "r_mbytes_per_sec": 0, 00:36:39.471 "w_mbytes_per_sec": 0 00:36:39.471 }, 00:36:39.471 "claimed": true, 00:36:39.471 "claim_type": "read_many_write_one", 00:36:39.471 "zoned": false, 00:36:39.471 "supported_io_types": { 00:36:39.471 "read": true, 00:36:39.471 "write": true, 00:36:39.471 "unmap": true, 00:36:39.471 "flush": true, 00:36:39.471 "reset": true, 00:36:39.471 "nvme_admin": true, 00:36:39.471 "nvme_io": true, 00:36:39.471 "nvme_io_md": false, 00:36:39.471 "write_zeroes": true, 00:36:39.471 "zcopy": false, 00:36:39.471 "get_zone_info": false, 00:36:39.471 "zone_management": false, 00:36:39.471 "zone_append": false, 00:36:39.471 "compare": true, 00:36:39.471 "compare_and_write": false, 00:36:39.471 "abort": true, 00:36:39.471 "seek_hole": false, 00:36:39.471 "seek_data": false, 00:36:39.471 "copy": true, 00:36:39.471 "nvme_iov_md": false 00:36:39.471 }, 00:36:39.471 "driver_specific": { 00:36:39.471 "nvme": [ 00:36:39.471 { 00:36:39.471 "pci_address": "0000:00:11.0", 00:36:39.471 "trid": { 00:36:39.471 "trtype": "PCIe", 00:36:39.471 "traddr": "0000:00:11.0" 00:36:39.471 }, 00:36:39.471 "ctrlr_data": { 00:36:39.471 "cntlid": 0, 00:36:39.471 "vendor_id": "0x1b36", 00:36:39.471 "model_number": "QEMU NVMe Ctrl", 00:36:39.471 "serial_number": "12341", 00:36:39.471 "firmware_revision": "8.0.0", 00:36:39.471 "subnqn": "nqn.2019-08.org.qemu:12341", 00:36:39.471 "oacs": { 00:36:39.471 "security": 0, 00:36:39.471 "format": 1, 00:36:39.471 "firmware": 0, 00:36:39.471 "ns_manage": 1 00:36:39.471 }, 00:36:39.471 "multi_ctrlr": false, 00:36:39.471 "ana_reporting": false 00:36:39.471 }, 00:36:39.471 "vs": { 00:36:39.471 "nvme_version": "1.4" 00:36:39.471 }, 00:36:39.471 "ns_data": { 00:36:39.471 "id": 1, 00:36:39.471 "can_share": false 00:36:39.471 } 00:36:39.471 } 00:36:39.471 ], 00:36:39.471 "mp_policy": "active_passive" 00:36:39.471 } 00:36:39.471 } 00:36:39.471 ]' 00:36:39.471 12:38:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:36:39.471 12:38:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:36:39.471 12:38:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:36:39.729 12:38:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:36:39.729 12:38:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:36:39.729 12:38:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:36:39.729 12:38:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:36:39.729 12:38:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:36:39.729 12:38:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:36:39.730 12:38:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:36:39.730 12:38:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:36:39.730 12:38:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=82721e7a-8dd4-414f-848a-2b8510d4dc58 00:36:39.730 12:38:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:36:39.730 12:38:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 82721e7a-8dd4-414f-848a-2b8510d4dc58 00:36:39.989 12:38:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:36:40.248 12:38:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=88d3ac39-d473-4272-9652-09bb434bf184 00:36:40.248 12:38:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 88d3ac39-d473-4272-9652-09bb434bf184 00:36:40.506 12:38:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=a62f6ab3-a1b4-430b-bc5e-d2f5d785d692 00:36:40.506 12:38:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z a62f6ab3-a1b4-430b-bc5e-d2f5d785d692 ]] 00:36:40.506 12:38:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 a62f6ab3-a1b4-430b-bc5e-d2f5d785d692 5120 00:36:40.506 12:38:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:36:40.506 12:38:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:36:40.506 12:38:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=a62f6ab3-a1b4-430b-bc5e-d2f5d785d692 00:36:40.506 12:38:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:36:40.506 12:38:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size a62f6ab3-a1b4-430b-bc5e-d2f5d785d692 00:36:40.506 12:38:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=a62f6ab3-a1b4-430b-bc5e-d2f5d785d692 00:36:40.506 12:38:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:36:40.506 12:38:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:36:40.506 12:38:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:36:40.506 12:38:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a62f6ab3-a1b4-430b-bc5e-d2f5d785d692 00:36:40.506 12:38:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:36:40.506 { 00:36:40.506 "name": "a62f6ab3-a1b4-430b-bc5e-d2f5d785d692", 00:36:40.506 "aliases": [ 00:36:40.506 "lvs/basen1p0" 00:36:40.506 ], 00:36:40.506 "product_name": "Logical Volume", 00:36:40.506 "block_size": 4096, 00:36:40.506 "num_blocks": 5242880, 00:36:40.506 "uuid": "a62f6ab3-a1b4-430b-bc5e-d2f5d785d692", 00:36:40.506 "assigned_rate_limits": { 00:36:40.506 "rw_ios_per_sec": 0, 00:36:40.506 "rw_mbytes_per_sec": 0, 00:36:40.506 "r_mbytes_per_sec": 0, 00:36:40.506 "w_mbytes_per_sec": 0 00:36:40.506 }, 00:36:40.506 "claimed": false, 00:36:40.506 "zoned": false, 00:36:40.506 "supported_io_types": { 00:36:40.506 "read": true, 00:36:40.506 "write": true, 00:36:40.506 "unmap": true, 00:36:40.506 "flush": false, 00:36:40.506 "reset": true, 00:36:40.506 "nvme_admin": false, 00:36:40.506 "nvme_io": false, 00:36:40.506 "nvme_io_md": false, 00:36:40.506 "write_zeroes": true, 00:36:40.506 
"zcopy": false, 00:36:40.506 "get_zone_info": false, 00:36:40.506 "zone_management": false, 00:36:40.506 "zone_append": false, 00:36:40.506 "compare": false, 00:36:40.506 "compare_and_write": false, 00:36:40.506 "abort": false, 00:36:40.506 "seek_hole": true, 00:36:40.506 "seek_data": true, 00:36:40.506 "copy": false, 00:36:40.506 "nvme_iov_md": false 00:36:40.506 }, 00:36:40.506 "driver_specific": { 00:36:40.506 "lvol": { 00:36:40.506 "lvol_store_uuid": "88d3ac39-d473-4272-9652-09bb434bf184", 00:36:40.506 "base_bdev": "basen1", 00:36:40.506 "thin_provision": true, 00:36:40.506 "num_allocated_clusters": 0, 00:36:40.506 "snapshot": false, 00:36:40.506 "clone": false, 00:36:40.506 "esnap_clone": false 00:36:40.506 } 00:36:40.506 } 00:36:40.506 } 00:36:40.506 ]' 00:36:40.506 12:38:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:36:40.763 12:38:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:36:40.763 12:38:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:36:40.763 12:38:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=5242880 00:36:40.763 12:38:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=20480 00:36:40.763 12:38:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 20480 00:36:40.763 12:38:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:36:40.763 12:38:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:36:40.763 12:38:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:36:41.020 12:38:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:36:41.020 12:38:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:36:41.020 12:38:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:36:41.279 12:38:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:36:41.279 12:38:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:36:41.279 12:38:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d a62f6ab3-a1b4-430b-bc5e-d2f5d785d692 -c cachen1p0 --l2p_dram_limit 2 00:36:41.279 [2024-07-10 12:38:50.697210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:41.279 [2024-07-10 12:38:50.697275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:36:41.279 [2024-07-10 12:38:50.697294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:36:41.279 [2024-07-10 12:38:50.697307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:41.279 [2024-07-10 12:38:50.697378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:41.279 [2024-07-10 12:38:50.697392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:36:41.279 [2024-07-10 12:38:50.697405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.047 ms 00:36:41.279 [2024-07-10 12:38:50.697418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:41.279 [2024-07-10 12:38:50.697440] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:36:41.279 [2024-07-10 12:38:50.698678] 
mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:36:41.279 [2024-07-10 12:38:50.698709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:41.279 [2024-07-10 12:38:50.698726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:36:41.279 [2024-07-10 12:38:50.698802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.276 ms 00:36:41.279 [2024-07-10 12:38:50.698817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:41.279 [2024-07-10 12:38:50.699020] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID b62d77fa-7044-4896-ab6a-6722d4d819d0 00:36:41.279 [2024-07-10 12:38:50.700864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:41.279 [2024-07-10 12:38:50.700896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:36:41.279 [2024-07-10 12:38:50.700914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:36:41.279 [2024-07-10 12:38:50.700924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:41.279 [2024-07-10 12:38:50.713968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:41.279 [2024-07-10 12:38:50.713998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:36:41.279 [2024-07-10 12:38:50.714018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.989 ms 00:36:41.279 [2024-07-10 12:38:50.714029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:41.279 [2024-07-10 12:38:50.714083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:41.279 [2024-07-10 12:38:50.714097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:36:41.279 [2024-07-10 12:38:50.714111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:36:41.279 [2024-07-10 12:38:50.714121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:41.279 [2024-07-10 12:38:50.714191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:41.279 [2024-07-10 12:38:50.714204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:36:41.279 [2024-07-10 12:38:50.714217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:36:41.279 [2024-07-10 12:38:50.714230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:41.279 [2024-07-10 12:38:50.714259] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:36:41.279 [2024-07-10 12:38:50.720758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:41.279 [2024-07-10 12:38:50.720800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:36:41.279 [2024-07-10 12:38:50.720812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.520 ms 00:36:41.279 [2024-07-10 12:38:50.720825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:41.279 [2024-07-10 12:38:50.720860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:41.279 [2024-07-10 12:38:50.720875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:36:41.279 [2024-07-10 12:38:50.720886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:36:41.279 [2024-07-10 12:38:50.720898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
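For reference, the target-side bdev stack behind the FTL startup trace above (and continuing below) reduces to the following RPC sequence. This is a condensed recap of the ftl/common.sh helpers as traced earlier in this run, reusing the PCI addresses, sizes and UUIDs that appear in the log, not an additional step of the test:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0       # -> basen1 (20480 MiB base device)
    $rpc bdev_lvol_delete_lvstore -u 82721e7a-8dd4-414f-848a-2b8510d4dc58  # clear the stale lvstore found on basen1
    $rpc bdev_lvol_create_lvstore basen1 lvs                               # -> 88d3ac39-d473-4272-9652-09bb434bf184
    $rpc bdev_lvol_create basen1p0 20480 -t -u 88d3ac39-d473-4272-9652-09bb434bf184   # thin 20 GiB base lvol
    $rpc bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0      # -> cachen1
    $rpc bdev_split_create cachen1 -s 5120 1                               # -> cachen1p0, the 5120 MiB NV cache
    $rpc -t 60 bdev_ftl_create -b ftl -d a62f6ab3-a1b4-430b-bc5e-d2f5d785d692 -c cachen1p0 --l2p_dram_limit 2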
00:36:41.279 [2024-07-10 12:38:50.720934] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:36:41.279 [2024-07-10 12:38:50.721068] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:36:41.279 [2024-07-10 12:38:50.721083] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:36:41.279 [2024-07-10 12:38:50.721102] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:36:41.279 [2024-07-10 12:38:50.721115] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:36:41.279 [2024-07-10 12:38:50.721130] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:36:41.279 [2024-07-10 12:38:50.721141] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:36:41.279 [2024-07-10 12:38:50.721153] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:36:41.279 [2024-07-10 12:38:50.721167] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:36:41.279 [2024-07-10 12:38:50.721179] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:36:41.279 [2024-07-10 12:38:50.721190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:41.279 [2024-07-10 12:38:50.721202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:36:41.279 [2024-07-10 12:38:50.721212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.258 ms 00:36:41.279 [2024-07-10 12:38:50.721225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:41.279 [2024-07-10 12:38:50.721296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:41.279 [2024-07-10 12:38:50.721308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:36:41.279 [2024-07-10 12:38:50.721319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:36:41.279 [2024-07-10 12:38:50.721331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:41.279 [2024-07-10 12:38:50.721418] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:36:41.279 [2024-07-10 12:38:50.721435] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:36:41.279 [2024-07-10 12:38:50.721445] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:36:41.279 [2024-07-10 12:38:50.721458] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:41.279 [2024-07-10 12:38:50.721468] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:36:41.279 [2024-07-10 12:38:50.721479] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:36:41.279 [2024-07-10 12:38:50.721500] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:36:41.279 [2024-07-10 12:38:50.721512] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:36:41.279 [2024-07-10 12:38:50.721521] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:36:41.279 [2024-07-10 12:38:50.721536] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:41.279 [2024-07-10 12:38:50.721545] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:36:41.279 [2024-07-10 12:38:50.721558] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 
14.75 MiB 00:36:41.279 [2024-07-10 12:38:50.721567] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:41.279 [2024-07-10 12:38:50.721579] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:36:41.279 [2024-07-10 12:38:50.721588] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:36:41.279 [2024-07-10 12:38:50.721599] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:41.279 [2024-07-10 12:38:50.721608] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:36:41.279 [2024-07-10 12:38:50.721623] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:36:41.279 [2024-07-10 12:38:50.721631] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:41.279 [2024-07-10 12:38:50.721643] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:36:41.279 [2024-07-10 12:38:50.721653] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:36:41.279 [2024-07-10 12:38:50.721664] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:36:41.279 [2024-07-10 12:38:50.721673] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:36:41.279 [2024-07-10 12:38:50.721684] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:36:41.279 [2024-07-10 12:38:50.721693] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:36:41.279 [2024-07-10 12:38:50.721704] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:36:41.279 [2024-07-10 12:38:50.721713] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:36:41.279 [2024-07-10 12:38:50.721724] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:36:41.279 [2024-07-10 12:38:50.721744] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:36:41.279 [2024-07-10 12:38:50.721756] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:36:41.279 [2024-07-10 12:38:50.721765] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:36:41.279 [2024-07-10 12:38:50.721777] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:36:41.279 [2024-07-10 12:38:50.721786] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:36:41.279 [2024-07-10 12:38:50.721800] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:41.279 [2024-07-10 12:38:50.721809] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:36:41.279 [2024-07-10 12:38:50.721821] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:36:41.279 [2024-07-10 12:38:50.721829] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:41.279 [2024-07-10 12:38:50.721842] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:36:41.279 [2024-07-10 12:38:50.721851] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:36:41.280 [2024-07-10 12:38:50.721862] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:41.280 [2024-07-10 12:38:50.721872] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:36:41.280 [2024-07-10 12:38:50.721885] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:36:41.280 [2024-07-10 12:38:50.721894] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:41.280 [2024-07-10 12:38:50.721905] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base 
device layout: 00:36:41.280 [2024-07-10 12:38:50.721915] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:36:41.280 [2024-07-10 12:38:50.721927] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:36:41.280 [2024-07-10 12:38:50.721938] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:41.280 [2024-07-10 12:38:50.721951] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:36:41.280 [2024-07-10 12:38:50.721960] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:36:41.280 [2024-07-10 12:38:50.721976] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:36:41.280 [2024-07-10 12:38:50.721986] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:36:41.280 [2024-07-10 12:38:50.721997] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:36:41.280 [2024-07-10 12:38:50.722006] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:36:41.280 [2024-07-10 12:38:50.722022] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:36:41.280 [2024-07-10 12:38:50.722034] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:36:41.280 [2024-07-10 12:38:50.722051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:36:41.280 [2024-07-10 12:38:50.722061] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:36:41.280 [2024-07-10 12:38:50.722074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:36:41.280 [2024-07-10 12:38:50.722084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:36:41.280 [2024-07-10 12:38:50.722097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:36:41.280 [2024-07-10 12:38:50.722107] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:36:41.280 [2024-07-10 12:38:50.722121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:36:41.280 [2024-07-10 12:38:50.722131] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:36:41.280 [2024-07-10 12:38:50.722144] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:36:41.280 [2024-07-10 12:38:50.722154] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:36:41.280 [2024-07-10 12:38:50.722169] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:36:41.280 [2024-07-10 12:38:50.722179] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:36:41.280 [2024-07-10 12:38:50.722192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 
blk_offs:0x2f80 blk_sz:0x20 00:36:41.280 [2024-07-10 12:38:50.722202] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:36:41.280 [2024-07-10 12:38:50.722214] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:36:41.280 [2024-07-10 12:38:50.722225] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:36:41.280 [2024-07-10 12:38:50.722238] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:36:41.280 [2024-07-10 12:38:50.722249] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:36:41.280 [2024-07-10 12:38:50.722263] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:36:41.280 [2024-07-10 12:38:50.722274] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:36:41.280 [2024-07-10 12:38:50.722287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:41.280 [2024-07-10 12:38:50.722297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:36:41.280 [2024-07-10 12:38:50.722309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.918 ms 00:36:41.280 [2024-07-10 12:38:50.722319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:41.280 [2024-07-10 12:38:50.722368] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
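A quick sanity check on the layout numbers in the dump above; the ~20% overprovisioning factor is an assumption (FTL's usual default), not something the log states explicitly:

    data_btm region:  18432.00 MiB in 4 KiB blocks         = 4,718,592 blocks
    exposed LBAs:     4,718,592 x 0.8 (assumed ~20% OP)    ≈ 3,774,873   -> matches "L2P entries: 3774873"
    L2P table size:   3,774,873 x 4 B (L2P address size)   ≈ 14.4 MiB    -> matches the 14.50 MiB l2p region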
00:36:41.280 [2024-07-10 12:38:50.722380] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:36:44.562 [2024-07-10 12:38:53.814421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:44.562 [2024-07-10 12:38:53.814498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:36:44.562 [2024-07-10 12:38:53.814521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3097.067 ms 00:36:44.562 [2024-07-10 12:38:53.814533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:44.562 [2024-07-10 12:38:53.862695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:44.562 [2024-07-10 12:38:53.862763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:36:44.562 [2024-07-10 12:38:53.862784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 47.957 ms 00:36:44.562 [2024-07-10 12:38:53.862795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:44.562 [2024-07-10 12:38:53.862900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:44.562 [2024-07-10 12:38:53.862913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:36:44.562 [2024-07-10 12:38:53.862927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:36:44.562 [2024-07-10 12:38:53.862942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:44.562 [2024-07-10 12:38:53.918619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:44.562 [2024-07-10 12:38:53.918678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:36:44.562 [2024-07-10 12:38:53.918697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 55.716 ms 00:36:44.562 [2024-07-10 12:38:53.918707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:44.562 [2024-07-10 12:38:53.918764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:44.562 [2024-07-10 12:38:53.918779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:36:44.562 [2024-07-10 12:38:53.918794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:36:44.562 [2024-07-10 12:38:53.918804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:44.562 [2024-07-10 12:38:53.919297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:44.562 [2024-07-10 12:38:53.919318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:36:44.562 [2024-07-10 12:38:53.919334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.420 ms 00:36:44.562 [2024-07-10 12:38:53.919345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:44.563 [2024-07-10 12:38:53.919397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:44.563 [2024-07-10 12:38:53.919409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:36:44.563 [2024-07-10 12:38:53.919426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:36:44.563 [2024-07-10 12:38:53.919436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:44.563 [2024-07-10 12:38:53.942498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:44.563 [2024-07-10 12:38:53.942553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:36:44.563 [2024-07-10 12:38:53.942573] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.072 ms 00:36:44.563 [2024-07-10 12:38:53.942584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:44.563 [2024-07-10 12:38:53.956359] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:36:44.563 [2024-07-10 12:38:53.957412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:44.563 [2024-07-10 12:38:53.957443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:36:44.563 [2024-07-10 12:38:53.957455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.750 ms 00:36:44.563 [2024-07-10 12:38:53.957468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:44.563 [2024-07-10 12:38:53.998164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:44.563 [2024-07-10 12:38:53.998239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:36:44.563 [2024-07-10 12:38:53.998256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 40.725 ms 00:36:44.563 [2024-07-10 12:38:53.998269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:44.563 [2024-07-10 12:38:53.998373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:44.563 [2024-07-10 12:38:53.998392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:36:44.563 [2024-07-10 12:38:53.998404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.058 ms 00:36:44.563 [2024-07-10 12:38:53.998421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:44.563 [2024-07-10 12:38:54.037310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:44.563 [2024-07-10 12:38:54.037359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:36:44.563 [2024-07-10 12:38:54.037375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 38.900 ms 00:36:44.563 [2024-07-10 12:38:54.037389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:44.821 [2024-07-10 12:38:54.075954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:44.821 [2024-07-10 12:38:54.076006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:36:44.821 [2024-07-10 12:38:54.076022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 38.580 ms 00:36:44.821 [2024-07-10 12:38:54.076036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:44.821 [2024-07-10 12:38:54.076865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:44.821 [2024-07-10 12:38:54.076897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:36:44.821 [2024-07-10 12:38:54.076910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.788 ms 00:36:44.821 [2024-07-10 12:38:54.076926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:44.821 [2024-07-10 12:38:54.181092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:44.821 [2024-07-10 12:38:54.181175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:36:44.821 [2024-07-10 12:38:54.181194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 104.276 ms 00:36:44.821 [2024-07-10 12:38:54.181212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:44.821 [2024-07-10 12:38:54.219166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:36:44.821 [2024-07-10 12:38:54.219234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:36:44.821 [2024-07-10 12:38:54.219250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.965 ms 00:36:44.821 [2024-07-10 12:38:54.219264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:44.821 [2024-07-10 12:38:54.256252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:44.821 [2024-07-10 12:38:54.256318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:36:44.821 [2024-07-10 12:38:54.256348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.000 ms 00:36:44.821 [2024-07-10 12:38:54.256361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:44.821 [2024-07-10 12:38:54.293448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:44.821 [2024-07-10 12:38:54.293508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:36:44.821 [2024-07-10 12:38:54.293526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.095 ms 00:36:44.821 [2024-07-10 12:38:54.293540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:44.821 [2024-07-10 12:38:54.293602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:44.821 [2024-07-10 12:38:54.293617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:36:44.821 [2024-07-10 12:38:54.293628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:36:44.821 [2024-07-10 12:38:54.293645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:44.821 [2024-07-10 12:38:54.293766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:44.821 [2024-07-10 12:38:54.293783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:36:44.821 [2024-07-10 12:38:54.293799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.048 ms 00:36:44.821 [2024-07-10 12:38:54.293812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:44.821 [2024-07-10 12:38:54.294959] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3603.094 ms, result 0 00:36:44.821 { 00:36:44.821 "name": "ftl", 00:36:44.821 "uuid": "b62d77fa-7044-4896-ab6a-6722d4d819d0" 00:36:44.821 } 00:36:45.079 12:38:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:36:45.079 [2024-07-10 12:38:54.505676] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:45.079 12:38:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:36:45.337 12:38:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:36:45.597 [2024-07-10 12:38:54.857465] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:36:45.597 12:38:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:36:45.597 [2024-07-10 12:38:55.055527] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:45.597 12:38:55 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:36:46.163 12:38:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:36:46.163 12:38:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:36:46.163 12:38:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:36:46.163 12:38:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:36:46.163 12:38:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:36:46.163 12:38:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:36:46.163 12:38:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:36:46.163 12:38:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:36:46.163 12:38:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:36:46.163 12:38:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:36:46.163 Fill FTL, iteration 1 00:36:46.163 12:38:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:36:46.163 12:38:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:36:46.163 12:38:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:36:46.163 12:38:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:36:46.163 12:38:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:36:46.163 12:38:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:36:46.163 12:38:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=86110 00:36:46.163 12:38:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:36:46.163 12:38:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 86110 /var/tmp/spdk.tgt.sock 00:36:46.163 12:38:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:36:46.163 12:38:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@829 -- # '[' -z 86110 ']' 00:36:46.163 12:38:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:36:46.163 12:38:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:46.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:36:46.163 12:38:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:36:46.163 12:38:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:46.163 12:38:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:36:46.163 [2024-07-10 12:38:55.510189] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
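The Fill FTL step goes through NVMe over TCP rather than straight to the bdev: the target side has just exported the FTL bdev (the nvmf_create_transport / nvmf_create_subsystem / nvmf_subsystem_add_ns / nvmf_subsystem_add_listener calls above), and the tcp_dd helper whose trace follows builds an initiator-side bdev config and runs spdk_dd against it. Condensed from the trace below, with $rootdir and $testdir standing in for the /home/vagrant/spdk_repo/spdk paths in the log, the sequence is roughly:

    rpc="$rootdir/scripts/rpc.py -s /var/tmp/spdk.tgt.sock"
    # short-lived helper app on core 1, used only to capture the initiator bdev config
    $rootdir/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock &
    $rpc bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
         -n nqn.2018-09.io.spdk:cnode0                                   # attaches the exported namespace as ftln1
    { echo '{"subsystems": ['; $rpc save_subsystem_config -n bdev; echo ']}'; } > $testdir/config/ini.json
    kill $spdk_ini_pid                                                   # config captured, helper app is torn down
    # spdk_dd then recreates that bdev stack from ini.json and writes 1024 x 1 MiB of random data at qd 2
    $rootdir/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
         --json=$testdir/config/ini.json --if=/dev/urandom --ob=ftln1 \
         --bs=1048576 --count=1024 --qd=2 --seek=0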
00:36:46.163 [2024-07-10 12:38:55.510369] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86110 ] 00:36:46.421 [2024-07-10 12:38:55.679711] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:46.697 [2024-07-10 12:38:55.975413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:47.631 12:38:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:47.631 12:38:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # return 0 00:36:47.631 12:38:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:36:47.890 ftln1 00:36:47.890 12:38:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:36:47.890 12:38:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:36:48.150 12:38:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:36:48.150 12:38:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 86110 00:36:48.150 12:38:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@948 -- # '[' -z 86110 ']' 00:36:48.150 12:38:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # kill -0 86110 00:36:48.150 12:38:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # uname 00:36:48.150 12:38:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:48.150 12:38:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86110 00:36:48.150 killing process with pid 86110 00:36:48.150 12:38:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:36:48.150 12:38:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:36:48.150 12:38:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86110' 00:36:48.150 12:38:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@967 -- # kill 86110 00:36:48.150 12:38:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # wait 86110 00:36:50.701 12:39:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:36:50.701 12:39:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:36:50.957 [2024-07-10 12:39:00.249280] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:36:50.957 [2024-07-10 12:39:00.249413] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86175 ] 00:36:50.957 [2024-07-10 12:39:00.419228] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:51.534 [2024-07-10 12:39:00.719523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:57.523  Copying: 250/1024 [MB] (250 MBps) Copying: 505/1024 [MB] (255 MBps) Copying: 761/1024 [MB] (256 MBps) Copying: 1014/1024 [MB] (253 MBps) Copying: 1024/1024 [MB] (average 253 MBps) 00:36:57.523 00:36:57.523 Calculate MD5 checksum, iteration 1 00:36:57.523 12:39:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:36:57.523 12:39:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:36:57.523 12:39:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:36:57.523 12:39:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:36:57.523 12:39:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:36:57.523 12:39:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:36:57.523 12:39:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:36:57.523 12:39:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:36:57.523 [2024-07-10 12:39:06.854275] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:36:57.523 [2024-07-10 12:39:06.854558] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86239 ] 00:36:57.781 [2024-07-10 12:39:07.026274] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:58.040 [2024-07-10 12:39:07.274187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:01.368  Copying: 680/1024 [MB] (680 MBps) Copying: 1024/1024 [MB] (average 662 MBps) 00:37:01.368 00:37:01.368 12:39:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:37:01.368 12:39:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:37:03.269 12:39:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:37:03.269 Fill FTL, iteration 2 00:37:03.269 12:39:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=17d153b8afec25cdc0edf85a3efefdcf 00:37:03.269 12:39:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:37:03.269 12:39:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:37:03.269 12:39:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:37:03.269 12:39:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:37:03.269 12:39:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:37:03.269 12:39:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:37:03.269 12:39:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:37:03.269 12:39:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:37:03.269 12:39:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:37:03.269 [2024-07-10 12:39:12.356855] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:37:03.269 [2024-07-10 12:39:12.357238] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86306 ] 00:37:03.269 [2024-07-10 12:39:12.528345] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:03.527 [2024-07-10 12:39:12.805836] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:09.525  Copying: 254/1024 [MB] (254 MBps) Copying: 493/1024 [MB] (239 MBps) Copying: 731/1024 [MB] (238 MBps) Copying: 967/1024 [MB] (236 MBps) Copying: 1024/1024 [MB] (average 240 MBps) 00:37:09.525 00:37:09.784 12:39:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:37:09.784 Calculate MD5 checksum, iteration 2 00:37:09.784 12:39:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:37:09.784 12:39:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:37:09.784 12:39:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:37:09.784 12:39:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:37:09.784 12:39:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:37:09.784 12:39:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:37:09.784 12:39:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:37:09.784 [2024-07-10 12:39:19.097240] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:37:09.784 [2024-07-10 12:39:19.097570] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86372 ] 00:37:10.042 [2024-07-10 12:39:19.269038] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:10.304 [2024-07-10 12:39:19.546410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:14.390  Copying: 687/1024 [MB] (687 MBps) Copying: 1024/1024 [MB] (average 647 MBps) 00:37:14.390 00:37:14.390 12:39:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:37:14.390 12:39:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:37:16.294 12:39:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:37:16.294 12:39:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=ef7a1f9146c9f138b6e8816b9015cbbd 00:37:16.294 12:39:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:37:16.294 12:39:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:37:16.294 12:39:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:37:16.294 [2024-07-10 12:39:25.571995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:16.294 [2024-07-10 12:39:25.572062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:37:16.294 [2024-07-10 12:39:25.572082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:37:16.294 [2024-07-10 12:39:25.572093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:16.294 [2024-07-10 12:39:25.572136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:16.294 [2024-07-10 12:39:25.572148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:37:16.294 [2024-07-10 12:39:25.572159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:37:16.294 [2024-07-10 12:39:25.572177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:16.294 [2024-07-10 12:39:25.572198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:16.294 [2024-07-10 12:39:25.572210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:37:16.294 [2024-07-10 12:39:25.572232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:37:16.294 [2024-07-10 12:39:25.572242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:16.294 [2024-07-10 12:39:25.572341] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.308 ms, result 0 00:37:16.294 true 00:37:16.295 12:39:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:37:16.295 { 00:37:16.295 "name": "ftl", 00:37:16.295 "properties": [ 00:37:16.295 { 00:37:16.295 "name": "superblock_version", 00:37:16.295 "value": 5, 00:37:16.295 "read-only": true 00:37:16.295 }, 00:37:16.295 { 00:37:16.295 "name": "base_device", 00:37:16.295 "bands": [ 00:37:16.295 { 00:37:16.295 "id": 0, 00:37:16.295 "state": "FREE", 00:37:16.295 "validity": 0.0 00:37:16.295 }, 00:37:16.295 { 00:37:16.295 "id": 1, 
00:37:16.295 "state": "FREE", 00:37:16.295 "validity": 0.0 00:37:16.295 }, 00:37:16.295 { 00:37:16.295 "id": 2, 00:37:16.295 "state": "FREE", 00:37:16.295 "validity": 0.0 00:37:16.295 }, 00:37:16.295 { 00:37:16.295 "id": 3, 00:37:16.295 "state": "FREE", 00:37:16.295 "validity": 0.0 00:37:16.295 }, 00:37:16.295 { 00:37:16.295 "id": 4, 00:37:16.295 "state": "FREE", 00:37:16.295 "validity": 0.0 00:37:16.295 }, 00:37:16.295 { 00:37:16.295 "id": 5, 00:37:16.295 "state": "FREE", 00:37:16.295 "validity": 0.0 00:37:16.295 }, 00:37:16.295 { 00:37:16.295 "id": 6, 00:37:16.295 "state": "FREE", 00:37:16.295 "validity": 0.0 00:37:16.295 }, 00:37:16.295 { 00:37:16.295 "id": 7, 00:37:16.295 "state": "FREE", 00:37:16.295 "validity": 0.0 00:37:16.295 }, 00:37:16.295 { 00:37:16.295 "id": 8, 00:37:16.295 "state": "FREE", 00:37:16.295 "validity": 0.0 00:37:16.295 }, 00:37:16.295 { 00:37:16.295 "id": 9, 00:37:16.295 "state": "FREE", 00:37:16.295 "validity": 0.0 00:37:16.295 }, 00:37:16.295 { 00:37:16.295 "id": 10, 00:37:16.295 "state": "FREE", 00:37:16.295 "validity": 0.0 00:37:16.295 }, 00:37:16.295 { 00:37:16.295 "id": 11, 00:37:16.295 "state": "FREE", 00:37:16.295 "validity": 0.0 00:37:16.295 }, 00:37:16.295 { 00:37:16.295 "id": 12, 00:37:16.295 "state": "FREE", 00:37:16.295 "validity": 0.0 00:37:16.295 }, 00:37:16.295 { 00:37:16.295 "id": 13, 00:37:16.295 "state": "FREE", 00:37:16.295 "validity": 0.0 00:37:16.295 }, 00:37:16.295 { 00:37:16.295 "id": 14, 00:37:16.295 "state": "FREE", 00:37:16.295 "validity": 0.0 00:37:16.295 }, 00:37:16.295 { 00:37:16.295 "id": 15, 00:37:16.295 "state": "FREE", 00:37:16.295 "validity": 0.0 00:37:16.295 }, 00:37:16.295 { 00:37:16.295 "id": 16, 00:37:16.295 "state": "FREE", 00:37:16.295 "validity": 0.0 00:37:16.295 }, 00:37:16.295 { 00:37:16.295 "id": 17, 00:37:16.295 "state": "FREE", 00:37:16.295 "validity": 0.0 00:37:16.295 } 00:37:16.295 ], 00:37:16.295 "read-only": true 00:37:16.295 }, 00:37:16.295 { 00:37:16.295 "name": "cache_device", 00:37:16.295 "type": "bdev", 00:37:16.295 "chunks": [ 00:37:16.295 { 00:37:16.295 "id": 0, 00:37:16.295 "state": "INACTIVE", 00:37:16.295 "utilization": 0.0 00:37:16.295 }, 00:37:16.295 { 00:37:16.295 "id": 1, 00:37:16.295 "state": "CLOSED", 00:37:16.295 "utilization": 1.0 00:37:16.295 }, 00:37:16.295 { 00:37:16.295 "id": 2, 00:37:16.295 "state": "CLOSED", 00:37:16.295 "utilization": 1.0 00:37:16.295 }, 00:37:16.295 { 00:37:16.295 "id": 3, 00:37:16.295 "state": "OPEN", 00:37:16.295 "utilization": 0.001953125 00:37:16.295 }, 00:37:16.295 { 00:37:16.295 "id": 4, 00:37:16.295 "state": "OPEN", 00:37:16.295 "utilization": 0.0 00:37:16.295 } 00:37:16.295 ], 00:37:16.295 "read-only": true 00:37:16.295 }, 00:37:16.295 { 00:37:16.295 "name": "verbose_mode", 00:37:16.295 "value": true, 00:37:16.295 "unit": "", 00:37:16.295 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:37:16.295 }, 00:37:16.295 { 00:37:16.295 "name": "prep_upgrade_on_shutdown", 00:37:16.295 "value": false, 00:37:16.295 "unit": "", 00:37:16.295 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:37:16.295 } 00:37:16.295 ] 00:37:16.295 } 00:37:16.555 12:39:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:37:16.555 [2024-07-10 12:39:25.951894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:16.555 [2024-07-10 12:39:25.951953] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:37:16.555 [2024-07-10 12:39:25.951970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:37:16.555 [2024-07-10 12:39:25.951980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:16.555 [2024-07-10 12:39:25.952038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:16.555 [2024-07-10 12:39:25.952050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:37:16.555 [2024-07-10 12:39:25.952062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:37:16.555 [2024-07-10 12:39:25.952072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:16.555 [2024-07-10 12:39:25.952092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:16.555 [2024-07-10 12:39:25.952103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:37:16.555 [2024-07-10 12:39:25.952113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:37:16.555 [2024-07-10 12:39:25.952130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:16.555 [2024-07-10 12:39:25.952195] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.293 ms, result 0 00:37:16.555 true 00:37:16.555 12:39:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:37:16.555 12:39:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:37:16.555 12:39:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:37:16.815 12:39:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:37:16.815 12:39:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:37:16.815 12:39:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:37:17.074 [2024-07-10 12:39:26.331598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:17.074 [2024-07-10 12:39:26.331672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:37:17.074 [2024-07-10 12:39:26.331689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:37:17.074 [2024-07-10 12:39:26.331699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:17.074 [2024-07-10 12:39:26.331792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:17.074 [2024-07-10 12:39:26.331807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:37:17.074 [2024-07-10 12:39:26.331819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:37:17.074 [2024-07-10 12:39:26.331829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:17.074 [2024-07-10 12:39:26.331851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:17.074 [2024-07-10 12:39:26.331862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:37:17.074 [2024-07-10 12:39:26.331873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:37:17.074 [2024-07-10 12:39:26.331883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:17.074 [2024-07-10 12:39:26.331949] 
mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.346 ms, result 0 00:37:17.074 true 00:37:17.074 12:39:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:37:17.074 { 00:37:17.074 "name": "ftl", 00:37:17.074 "properties": [ 00:37:17.074 { 00:37:17.075 "name": "superblock_version", 00:37:17.075 "value": 5, 00:37:17.075 "read-only": true 00:37:17.075 }, 00:37:17.075 { 00:37:17.075 "name": "base_device", 00:37:17.075 "bands": [ 00:37:17.075 { 00:37:17.075 "id": 0, 00:37:17.075 "state": "FREE", 00:37:17.075 "validity": 0.0 00:37:17.075 }, 00:37:17.075 { 00:37:17.075 "id": 1, 00:37:17.075 "state": "FREE", 00:37:17.075 "validity": 0.0 00:37:17.075 }, 00:37:17.075 { 00:37:17.075 "id": 2, 00:37:17.075 "state": "FREE", 00:37:17.075 "validity": 0.0 00:37:17.075 }, 00:37:17.075 { 00:37:17.075 "id": 3, 00:37:17.075 "state": "FREE", 00:37:17.075 "validity": 0.0 00:37:17.075 }, 00:37:17.075 { 00:37:17.075 "id": 4, 00:37:17.075 "state": "FREE", 00:37:17.075 "validity": 0.0 00:37:17.075 }, 00:37:17.075 { 00:37:17.075 "id": 5, 00:37:17.075 "state": "FREE", 00:37:17.075 "validity": 0.0 00:37:17.075 }, 00:37:17.075 { 00:37:17.075 "id": 6, 00:37:17.075 "state": "FREE", 00:37:17.075 "validity": 0.0 00:37:17.075 }, 00:37:17.075 { 00:37:17.075 "id": 7, 00:37:17.075 "state": "FREE", 00:37:17.075 "validity": 0.0 00:37:17.075 }, 00:37:17.075 { 00:37:17.075 "id": 8, 00:37:17.075 "state": "FREE", 00:37:17.075 "validity": 0.0 00:37:17.075 }, 00:37:17.075 { 00:37:17.075 "id": 9, 00:37:17.075 "state": "FREE", 00:37:17.075 "validity": 0.0 00:37:17.075 }, 00:37:17.075 { 00:37:17.075 "id": 10, 00:37:17.075 "state": "FREE", 00:37:17.075 "validity": 0.0 00:37:17.075 }, 00:37:17.075 { 00:37:17.075 "id": 11, 00:37:17.075 "state": "FREE", 00:37:17.075 "validity": 0.0 00:37:17.075 }, 00:37:17.075 { 00:37:17.075 "id": 12, 00:37:17.075 "state": "FREE", 00:37:17.075 "validity": 0.0 00:37:17.075 }, 00:37:17.075 { 00:37:17.075 "id": 13, 00:37:17.075 "state": "FREE", 00:37:17.075 "validity": 0.0 00:37:17.075 }, 00:37:17.075 { 00:37:17.075 "id": 14, 00:37:17.075 "state": "FREE", 00:37:17.075 "validity": 0.0 00:37:17.075 }, 00:37:17.075 { 00:37:17.075 "id": 15, 00:37:17.075 "state": "FREE", 00:37:17.075 "validity": 0.0 00:37:17.075 }, 00:37:17.075 { 00:37:17.075 "id": 16, 00:37:17.075 "state": "FREE", 00:37:17.075 "validity": 0.0 00:37:17.075 }, 00:37:17.075 { 00:37:17.075 "id": 17, 00:37:17.075 "state": "FREE", 00:37:17.075 "validity": 0.0 00:37:17.075 } 00:37:17.075 ], 00:37:17.075 "read-only": true 00:37:17.075 }, 00:37:17.075 { 00:37:17.075 "name": "cache_device", 00:37:17.075 "type": "bdev", 00:37:17.075 "chunks": [ 00:37:17.075 { 00:37:17.075 "id": 0, 00:37:17.075 "state": "INACTIVE", 00:37:17.075 "utilization": 0.0 00:37:17.075 }, 00:37:17.075 { 00:37:17.075 "id": 1, 00:37:17.075 "state": "CLOSED", 00:37:17.075 "utilization": 1.0 00:37:17.075 }, 00:37:17.075 { 00:37:17.075 "id": 2, 00:37:17.075 "state": "CLOSED", 00:37:17.075 "utilization": 1.0 00:37:17.075 }, 00:37:17.075 { 00:37:17.075 "id": 3, 00:37:17.075 "state": "OPEN", 00:37:17.075 "utilization": 0.001953125 00:37:17.075 }, 00:37:17.075 { 00:37:17.075 "id": 4, 00:37:17.075 "state": "OPEN", 00:37:17.075 "utilization": 0.0 00:37:17.075 } 00:37:17.075 ], 00:37:17.075 "read-only": true 00:37:17.075 }, 00:37:17.075 { 00:37:17.075 "name": "verbose_mode", 00:37:17.075 "value": true, 00:37:17.075 "unit": "", 00:37:17.075 
"desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:37:17.075 }, 00:37:17.075 { 00:37:17.075 "name": "prep_upgrade_on_shutdown", 00:37:17.075 "value": true, 00:37:17.075 "unit": "", 00:37:17.075 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:37:17.075 } 00:37:17.075 ] 00:37:17.075 } 00:37:17.075 12:39:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:37:17.075 12:39:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 85992 ]] 00:37:17.075 12:39:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 85992 00:37:17.075 12:39:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@948 -- # '[' -z 85992 ']' 00:37:17.335 12:39:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # kill -0 85992 00:37:17.335 12:39:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # uname 00:37:17.335 12:39:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:17.335 12:39:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85992 00:37:17.335 killing process with pid 85992 00:37:17.335 12:39:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:37:17.335 12:39:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:37:17.335 12:39:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85992' 00:37:17.335 12:39:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@967 -- # kill 85992 00:37:17.335 12:39:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # wait 85992 00:37:18.272 [2024-07-10 12:39:27.717773] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:37:18.272 [2024-07-10 12:39:27.737230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:18.272 [2024-07-10 12:39:27.737292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:37:18.272 [2024-07-10 12:39:27.737309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:37:18.272 [2024-07-10 12:39:27.737320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:18.272 [2024-07-10 12:39:27.737344] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:37:18.272 [2024-07-10 12:39:27.741781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:18.272 [2024-07-10 12:39:27.741814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:37:18.272 [2024-07-10 12:39:27.741828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.427 ms 00:37:18.272 [2024-07-10 12:39:27.741845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:26.428 [2024-07-10 12:39:35.061426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:26.428 [2024-07-10 12:39:35.061522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:37:26.428 [2024-07-10 12:39:35.061541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7331.425 ms 00:37:26.428 [2024-07-10 12:39:35.061552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:26.428 [2024-07-10 12:39:35.062778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:26.428 [2024-07-10 12:39:35.062811] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:37:26.428 [2024-07-10 12:39:35.062831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.202 ms 00:37:26.428 [2024-07-10 12:39:35.062842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:26.428 [2024-07-10 12:39:35.063771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:26.428 [2024-07-10 12:39:35.063792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:37:26.428 [2024-07-10 12:39:35.063805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.896 ms 00:37:26.428 [2024-07-10 12:39:35.063816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:26.428 [2024-07-10 12:39:35.079275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:26.428 [2024-07-10 12:39:35.079336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:37:26.428 [2024-07-10 12:39:35.079352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.427 ms 00:37:26.428 [2024-07-10 12:39:35.079363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:26.428 [2024-07-10 12:39:35.089055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:26.429 [2024-07-10 12:39:35.089116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:37:26.429 [2024-07-10 12:39:35.089133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.660 ms 00:37:26.429 [2024-07-10 12:39:35.089155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:26.429 [2024-07-10 12:39:35.089256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:26.429 [2024-07-10 12:39:35.089275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:37:26.429 [2024-07-10 12:39:35.089287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.057 ms 00:37:26.429 [2024-07-10 12:39:35.089298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:26.429 [2024-07-10 12:39:35.104276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:26.429 [2024-07-10 12:39:35.104323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:37:26.429 [2024-07-10 12:39:35.104340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.982 ms 00:37:26.429 [2024-07-10 12:39:35.104351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:26.429 [2024-07-10 12:39:35.119203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:26.429 [2024-07-10 12:39:35.119245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:37:26.429 [2024-07-10 12:39:35.119260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.835 ms 00:37:26.429 [2024-07-10 12:39:35.119270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:26.429 [2024-07-10 12:39:35.135076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:26.429 [2024-07-10 12:39:35.135122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:37:26.429 [2024-07-10 12:39:35.135136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.792 ms 00:37:26.429 [2024-07-10 12:39:35.135146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:26.429 [2024-07-10 12:39:35.151747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:26.429 
[2024-07-10 12:39:35.151785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:37:26.429 [2024-07-10 12:39:35.151799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.532 ms 00:37:26.429 [2024-07-10 12:39:35.151810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:26.429 [2024-07-10 12:39:35.151846] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:37:26.429 [2024-07-10 12:39:35.151874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:37:26.429 [2024-07-10 12:39:35.151889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:37:26.429 [2024-07-10 12:39:35.151901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:37:26.429 [2024-07-10 12:39:35.151913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:37:26.429 [2024-07-10 12:39:35.151926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:37:26.429 [2024-07-10 12:39:35.151937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:37:26.429 [2024-07-10 12:39:35.151949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:37:26.429 [2024-07-10 12:39:35.151960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:37:26.429 [2024-07-10 12:39:35.151971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:37:26.429 [2024-07-10 12:39:35.151981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:37:26.429 [2024-07-10 12:39:35.151992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:37:26.429 [2024-07-10 12:39:35.152002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:37:26.429 [2024-07-10 12:39:35.152013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:37:26.429 [2024-07-10 12:39:35.152024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:37:26.429 [2024-07-10 12:39:35.152035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:37:26.429 [2024-07-10 12:39:35.152059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:37:26.429 [2024-07-10 12:39:35.152071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:37:26.429 [2024-07-10 12:39:35.152082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:37:26.429 [2024-07-10 12:39:35.152097] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:37:26.429 [2024-07-10 12:39:35.152109] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: b62d77fa-7044-4896-ab6a-6722d4d819d0 00:37:26.429 [2024-07-10 12:39:35.152120] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:37:26.429 [2024-07-10 12:39:35.152137] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:37:26.429 [2024-07-10 12:39:35.152147] 
ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:37:26.429 [2024-07-10 12:39:35.152158] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:37:26.429 [2024-07-10 12:39:35.152169] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:37:26.429 [2024-07-10 12:39:35.152180] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:37:26.429 [2024-07-10 12:39:35.152190] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:37:26.429 [2024-07-10 12:39:35.152200] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:37:26.429 [2024-07-10 12:39:35.152210] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:37:26.429 [2024-07-10 12:39:35.152220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:26.429 [2024-07-10 12:39:35.152231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:37:26.429 [2024-07-10 12:39:35.152246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.376 ms 00:37:26.429 [2024-07-10 12:39:35.152256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:26.429 [2024-07-10 12:39:35.174121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:26.429 [2024-07-10 12:39:35.174156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:37:26.429 [2024-07-10 12:39:35.174171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.868 ms 00:37:26.429 [2024-07-10 12:39:35.174181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:26.429 [2024-07-10 12:39:35.174698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:26.429 [2024-07-10 12:39:35.174716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:37:26.429 [2024-07-10 12:39:35.174741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.489 ms 00:37:26.429 [2024-07-10 12:39:35.174753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:26.429 [2024-07-10 12:39:35.239358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:26.429 [2024-07-10 12:39:35.239435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:37:26.429 [2024-07-10 12:39:35.239452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:26.429 [2024-07-10 12:39:35.239464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:26.429 [2024-07-10 12:39:35.239525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:26.429 [2024-07-10 12:39:35.239544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:37:26.429 [2024-07-10 12:39:35.239554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:26.429 [2024-07-10 12:39:35.239564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:26.429 [2024-07-10 12:39:35.239674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:26.429 [2024-07-10 12:39:35.239689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:37:26.429 [2024-07-10 12:39:35.239700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:26.429 [2024-07-10 12:39:35.239711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:26.429 [2024-07-10 12:39:35.239765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:26.429 [2024-07-10 
12:39:35.239778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:37:26.429 [2024-07-10 12:39:35.239794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:26.429 [2024-07-10 12:39:35.239805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:26.429 [2024-07-10 12:39:35.366439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:26.429 [2024-07-10 12:39:35.366507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:37:26.429 [2024-07-10 12:39:35.366524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:26.429 [2024-07-10 12:39:35.366535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:26.429 [2024-07-10 12:39:35.468369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:26.429 [2024-07-10 12:39:35.468457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:37:26.429 [2024-07-10 12:39:35.468473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:26.429 [2024-07-10 12:39:35.468485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:26.429 [2024-07-10 12:39:35.468592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:26.429 [2024-07-10 12:39:35.468604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:37:26.429 [2024-07-10 12:39:35.468615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:26.429 [2024-07-10 12:39:35.468625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:26.429 [2024-07-10 12:39:35.468671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:26.429 [2024-07-10 12:39:35.468682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:37:26.429 [2024-07-10 12:39:35.468704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:26.429 [2024-07-10 12:39:35.468719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:26.429 [2024-07-10 12:39:35.468881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:26.429 [2024-07-10 12:39:35.468897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:37:26.429 [2024-07-10 12:39:35.468908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:26.429 [2024-07-10 12:39:35.468918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:26.429 [2024-07-10 12:39:35.468956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:26.429 [2024-07-10 12:39:35.468968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:37:26.429 [2024-07-10 12:39:35.468978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:26.429 [2024-07-10 12:39:35.468988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:26.429 [2024-07-10 12:39:35.469040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:26.429 [2024-07-10 12:39:35.469052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:37:26.429 [2024-07-10 12:39:35.469063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:26.429 [2024-07-10 12:39:35.469072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:26.429 [2024-07-10 12:39:35.469123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl] Rollback 00:37:26.429 [2024-07-10 12:39:35.469135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:37:26.429 [2024-07-10 12:39:35.469145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:26.429 [2024-07-10 12:39:35.469159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:26.429 [2024-07-10 12:39:35.469285] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7744.582 ms, result 0 00:37:29.737 12:39:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:37:29.737 12:39:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:37:29.737 12:39:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:37:29.737 12:39:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:37:29.737 12:39:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:37:29.737 12:39:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=86576 00:37:29.737 12:39:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:37:29.737 12:39:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:37:29.737 12:39:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 86576 00:37:29.737 12:39:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@829 -- # '[' -z 86576 ']' 00:37:29.737 12:39:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:29.737 12:39:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:29.737 12:39:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:29.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:29.737 12:39:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:29.737 12:39:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:37:29.737 [2024-07-10 12:39:38.942214] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:37:29.738 [2024-07-10 12:39:38.942588] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86576 ] 00:37:29.738 [2024-07-10 12:39:39.110123] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:30.033 [2024-07-10 12:39:39.365925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:30.971 [2024-07-10 12:39:40.408272] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:37:30.971 [2024-07-10 12:39:40.408608] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:37:31.232 [2024-07-10 12:39:40.557025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:31.232 [2024-07-10 12:39:40.557218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:37:31.232 [2024-07-10 12:39:40.557333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:37:31.232 [2024-07-10 12:39:40.557372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:31.232 [2024-07-10 12:39:40.557468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:31.232 [2024-07-10 12:39:40.557507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:37:31.232 [2024-07-10 12:39:40.557540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:37:31.232 [2024-07-10 12:39:40.557636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:31.232 [2024-07-10 12:39:40.557698] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:37:31.232 [2024-07-10 12:39:40.558858] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:37:31.232 [2024-07-10 12:39:40.558986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:31.232 [2024-07-10 12:39:40.559001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:37:31.232 [2024-07-10 12:39:40.559014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.295 ms 00:37:31.232 [2024-07-10 12:39:40.559023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:31.232 [2024-07-10 12:39:40.560496] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:37:31.232 [2024-07-10 12:39:40.580236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:31.232 [2024-07-10 12:39:40.580271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:37:31.232 [2024-07-10 12:39:40.580285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.772 ms 00:37:31.232 [2024-07-10 12:39:40.580312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:31.232 [2024-07-10 12:39:40.580378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:31.232 [2024-07-10 12:39:40.580391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:37:31.232 [2024-07-10 12:39:40.580402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:37:31.232 [2024-07-10 12:39:40.580412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:31.232 [2024-07-10 12:39:40.587245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:31.232 [2024-07-10 
12:39:40.587272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:37:31.232 [2024-07-10 12:39:40.587284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.766 ms 00:37:31.232 [2024-07-10 12:39:40.587310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:31.232 [2024-07-10 12:39:40.587373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:31.232 [2024-07-10 12:39:40.587388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:37:31.232 [2024-07-10 12:39:40.587399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:37:31.232 [2024-07-10 12:39:40.587412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:31.232 [2024-07-10 12:39:40.587456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:31.232 [2024-07-10 12:39:40.587469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:37:31.232 [2024-07-10 12:39:40.587480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:37:31.232 [2024-07-10 12:39:40.587498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:31.232 [2024-07-10 12:39:40.587525] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:37:31.232 [2024-07-10 12:39:40.592953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:31.232 [2024-07-10 12:39:40.592984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:37:31.232 [2024-07-10 12:39:40.592997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.442 ms 00:37:31.232 [2024-07-10 12:39:40.593007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:31.232 [2024-07-10 12:39:40.593037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:31.232 [2024-07-10 12:39:40.593049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:37:31.232 [2024-07-10 12:39:40.593059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:37:31.232 [2024-07-10 12:39:40.593073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:31.232 [2024-07-10 12:39:40.593125] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:37:31.232 [2024-07-10 12:39:40.593151] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:37:31.232 [2024-07-10 12:39:40.593185] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:37:31.232 [2024-07-10 12:39:40.593203] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x168 bytes 00:37:31.232 [2024-07-10 12:39:40.593296] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:37:31.232 [2024-07-10 12:39:40.593310] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:37:31.232 [2024-07-10 12:39:40.593326] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:37:31.232 [2024-07-10 12:39:40.593339] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:37:31.232 [2024-07-10 12:39:40.593350] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:37:31.232 [2024-07-10 12:39:40.593362] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:37:31.232 [2024-07-10 12:39:40.593372] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:37:31.232 [2024-07-10 12:39:40.593382] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:37:31.232 [2024-07-10 12:39:40.593392] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:37:31.232 [2024-07-10 12:39:40.593402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:31.232 [2024-07-10 12:39:40.593412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:37:31.232 [2024-07-10 12:39:40.593423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.281 ms 00:37:31.232 [2024-07-10 12:39:40.593433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:31.232 [2024-07-10 12:39:40.593516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:31.232 [2024-07-10 12:39:40.593527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:37:31.232 [2024-07-10 12:39:40.593537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:37:31.232 [2024-07-10 12:39:40.593551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:31.232 [2024-07-10 12:39:40.593639] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:37:31.232 [2024-07-10 12:39:40.593652] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:37:31.232 [2024-07-10 12:39:40.593662] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:37:31.232 [2024-07-10 12:39:40.593673] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:31.232 [2024-07-10 12:39:40.593683] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:37:31.232 [2024-07-10 12:39:40.593692] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:37:31.232 [2024-07-10 12:39:40.593702] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:37:31.232 [2024-07-10 12:39:40.593711] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:37:31.232 [2024-07-10 12:39:40.593723] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:37:31.233 [2024-07-10 12:39:40.593749] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:31.233 [2024-07-10 12:39:40.593759] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:37:31.233 [2024-07-10 12:39:40.593771] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:37:31.233 [2024-07-10 12:39:40.593781] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:31.233 [2024-07-10 12:39:40.593790] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:37:31.233 [2024-07-10 12:39:40.593800] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:37:31.233 [2024-07-10 12:39:40.593810] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:31.233 [2024-07-10 12:39:40.593820] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:37:31.233 [2024-07-10 12:39:40.593829] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:37:31.233 [2024-07-10 12:39:40.593839] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:31.233 [2024-07-10 12:39:40.593848] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:37:31.233 [2024-07-10 12:39:40.593877] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:37:31.233 [2024-07-10 12:39:40.593886] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:37:31.233 [2024-07-10 12:39:40.593895] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:37:31.233 [2024-07-10 12:39:40.593904] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:37:31.233 [2024-07-10 12:39:40.593914] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:37:31.233 [2024-07-10 12:39:40.593923] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:37:31.233 [2024-07-10 12:39:40.593932] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:37:31.233 [2024-07-10 12:39:40.593941] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:37:31.233 [2024-07-10 12:39:40.593951] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:37:31.233 [2024-07-10 12:39:40.593960] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:37:31.233 [2024-07-10 12:39:40.593969] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:37:31.233 [2024-07-10 12:39:40.593978] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:37:31.233 [2024-07-10 12:39:40.593987] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:37:31.233 [2024-07-10 12:39:40.593996] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:31.233 [2024-07-10 12:39:40.594006] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:37:31.233 [2024-07-10 12:39:40.594014] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:37:31.233 [2024-07-10 12:39:40.594027] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:31.233 [2024-07-10 12:39:40.594036] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:37:31.233 [2024-07-10 12:39:40.594045] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:37:31.233 [2024-07-10 12:39:40.594054] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:31.233 [2024-07-10 12:39:40.594064] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:37:31.233 [2024-07-10 12:39:40.594072] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:37:31.233 [2024-07-10 12:39:40.594081] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:31.233 [2024-07-10 12:39:40.594090] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:37:31.233 [2024-07-10 12:39:40.594105] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:37:31.233 [2024-07-10 12:39:40.594115] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:37:31.233 [2024-07-10 12:39:40.594124] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:31.233 [2024-07-10 12:39:40.594134] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:37:31.233 [2024-07-10 12:39:40.594144] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:37:31.233 [2024-07-10 12:39:40.594153] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:37:31.233 [2024-07-10 12:39:40.594162] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:37:31.233 [2024-07-10 12:39:40.594181] 
ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:37:31.233 [2024-07-10 12:39:40.594190] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:37:31.233 [2024-07-10 12:39:40.594201] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:37:31.233 [2024-07-10 12:39:40.594213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:37:31.233 [2024-07-10 12:39:40.594224] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:37:31.233 [2024-07-10 12:39:40.594235] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:37:31.233 [2024-07-10 12:39:40.594245] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:37:31.233 [2024-07-10 12:39:40.594256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:37:31.233 [2024-07-10 12:39:40.594267] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:37:31.233 [2024-07-10 12:39:40.594277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:37:31.233 [2024-07-10 12:39:40.594287] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:37:31.233 [2024-07-10 12:39:40.594298] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:37:31.233 [2024-07-10 12:39:40.594308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:37:31.233 [2024-07-10 12:39:40.594318] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:37:31.233 [2024-07-10 12:39:40.594328] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:37:31.233 [2024-07-10 12:39:40.594338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:37:31.233 [2024-07-10 12:39:40.594348] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:37:31.233 [2024-07-10 12:39:40.594358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:37:31.233 [2024-07-10 12:39:40.594368] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:37:31.233 [2024-07-10 12:39:40.594380] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:37:31.233 [2024-07-10 12:39:40.594391] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:37:31.233 [2024-07-10 12:39:40.594401] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:37:31.233 [2024-07-10 12:39:40.594412] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:37:31.233 [2024-07-10 12:39:40.594422] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:37:31.233 [2024-07-10 12:39:40.594436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:31.233 [2024-07-10 12:39:40.594447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:37:31.233 [2024-07-10 12:39:40.594456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.849 ms 00:37:31.233 [2024-07-10 12:39:40.594470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:31.233 [2024-07-10 12:39:40.594516] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:37:31.233 [2024-07-10 12:39:40.594529] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:37:34.521 [2024-07-10 12:39:43.932497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:34.521 [2024-07-10 12:39:43.932565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:37:34.521 [2024-07-10 12:39:43.932584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3343.396 ms 00:37:34.521 [2024-07-10 12:39:43.932595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:34.521 [2024-07-10 12:39:43.972329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:34.521 [2024-07-10 12:39:43.972380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:37:34.521 [2024-07-10 12:39:43.972396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 39.484 ms 00:37:34.521 [2024-07-10 12:39:43.972412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:34.521 [2024-07-10 12:39:43.972516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:34.522 [2024-07-10 12:39:43.972529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:37:34.522 [2024-07-10 12:39:43.972541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:37:34.522 [2024-07-10 12:39:43.972552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:34.781 [2024-07-10 12:39:44.023911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:34.781 [2024-07-10 12:39:44.023965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:37:34.781 [2024-07-10 12:39:44.023980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 51.396 ms 00:37:34.781 [2024-07-10 12:39:44.023992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:34.781 [2024-07-10 12:39:44.024037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:34.781 [2024-07-10 12:39:44.024048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:37:34.781 [2024-07-10 12:39:44.024059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:37:34.781 [2024-07-10 12:39:44.024069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:34.781 [2024-07-10 12:39:44.024557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:34.781 [2024-07-10 12:39:44.024571] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:37:34.781 [2024-07-10 12:39:44.024586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.417 ms 00:37:34.781 [2024-07-10 12:39:44.024596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:34.781 [2024-07-10 12:39:44.024640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:34.781 [2024-07-10 12:39:44.024652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:37:34.781 [2024-07-10 12:39:44.024663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:37:34.781 [2024-07-10 12:39:44.024673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:34.781 [2024-07-10 12:39:44.046109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:34.781 [2024-07-10 12:39:44.046152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:37:34.781 [2024-07-10 12:39:44.046167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.447 ms 00:37:34.781 [2024-07-10 12:39:44.046193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:34.781 [2024-07-10 12:39:44.066414] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:37:34.781 [2024-07-10 12:39:44.066457] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:37:34.781 [2024-07-10 12:39:44.066473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:34.781 [2024-07-10 12:39:44.066500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:37:34.781 [2024-07-10 12:39:44.066512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.193 ms 00:37:34.781 [2024-07-10 12:39:44.066522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:34.781 [2024-07-10 12:39:44.086825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:34.781 [2024-07-10 12:39:44.086867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:37:34.781 [2024-07-10 12:39:44.086881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.291 ms 00:37:34.781 [2024-07-10 12:39:44.086907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:34.781 [2024-07-10 12:39:44.106052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:34.781 [2024-07-10 12:39:44.106086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:37:34.781 [2024-07-10 12:39:44.106099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.128 ms 00:37:34.781 [2024-07-10 12:39:44.106109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:34.781 [2024-07-10 12:39:44.125489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:34.781 [2024-07-10 12:39:44.125522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:37:34.781 [2024-07-10 12:39:44.125534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.371 ms 00:37:34.781 [2024-07-10 12:39:44.125545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:34.781 [2024-07-10 12:39:44.126334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:34.781 [2024-07-10 12:39:44.126361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:37:34.781 [2024-07-10 
12:39:44.126373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.691 ms 00:37:34.781 [2024-07-10 12:39:44.126383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:34.781 [2024-07-10 12:39:44.224140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:34.781 [2024-07-10 12:39:44.224232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:37:34.781 [2024-07-10 12:39:44.224250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 97.883 ms 00:37:34.781 [2024-07-10 12:39:44.224261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:34.781 [2024-07-10 12:39:44.238057] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:37:34.781 [2024-07-10 12:39:44.239119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:34.781 [2024-07-10 12:39:44.239140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:37:34.781 [2024-07-10 12:39:44.239155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.799 ms 00:37:34.781 [2024-07-10 12:39:44.239171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:34.781 [2024-07-10 12:39:44.239281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:34.781 [2024-07-10 12:39:44.239293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:37:34.782 [2024-07-10 12:39:44.239305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:37:34.782 [2024-07-10 12:39:44.239315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:34.782 [2024-07-10 12:39:44.239377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:34.782 [2024-07-10 12:39:44.239389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:37:34.782 [2024-07-10 12:39:44.239400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:37:34.782 [2024-07-10 12:39:44.239410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:34.782 [2024-07-10 12:39:44.239436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:34.782 [2024-07-10 12:39:44.239447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:37:34.782 [2024-07-10 12:39:44.239458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:37:34.782 [2024-07-10 12:39:44.239467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:34.782 [2024-07-10 12:39:44.239502] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:37:34.782 [2024-07-10 12:39:44.239515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:34.782 [2024-07-10 12:39:44.239524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:37:34.782 [2024-07-10 12:39:44.239535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:37:34.782 [2024-07-10 12:39:44.239544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:35.041 [2024-07-10 12:39:44.278046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:35.041 [2024-07-10 12:39:44.278112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:37:35.041 [2024-07-10 12:39:44.278128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 38.542 ms 00:37:35.041 [2024-07-10 12:39:44.278155] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:35.041 [2024-07-10 12:39:44.278236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:35.041 [2024-07-10 12:39:44.278249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:37:35.041 [2024-07-10 12:39:44.278260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:37:35.041 [2024-07-10 12:39:44.278271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:35.041 [2024-07-10 12:39:44.279677] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3728.234 ms, result 0 00:37:35.041 [2024-07-10 12:39:44.294428] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:35.041 [2024-07-10 12:39:44.310405] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:37:35.041 [2024-07-10 12:39:44.320492] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:35.610 12:39:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:35.610 12:39:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # return 0 00:37:35.610 12:39:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:37:35.610 12:39:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:37:35.610 12:39:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:37:35.610 [2024-07-10 12:39:45.060328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:35.610 [2024-07-10 12:39:45.060386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:37:35.610 [2024-07-10 12:39:45.060403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:37:35.610 [2024-07-10 12:39:45.060414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:35.610 [2024-07-10 12:39:45.060441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:35.610 [2024-07-10 12:39:45.060457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:37:35.610 [2024-07-10 12:39:45.060469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:37:35.610 [2024-07-10 12:39:45.060479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:35.610 [2024-07-10 12:39:45.060500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:35.610 [2024-07-10 12:39:45.060511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:37:35.610 [2024-07-10 12:39:45.060522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:37:35.610 [2024-07-10 12:39:45.060532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:35.610 [2024-07-10 12:39:45.060597] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.259 ms, result 0 00:37:35.610 true 00:37:35.610 12:39:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:37:35.869 { 00:37:35.869 "name": "ftl", 00:37:35.869 "properties": [ 00:37:35.869 { 00:37:35.869 "name": "superblock_version", 00:37:35.869 "value": 5, 00:37:35.869 "read-only": true 00:37:35.869 }, 
00:37:35.869 { 00:37:35.869 "name": "base_device", 00:37:35.869 "bands": [ 00:37:35.869 { 00:37:35.869 "id": 0, 00:37:35.869 "state": "CLOSED", 00:37:35.869 "validity": 1.0 00:37:35.869 }, 00:37:35.869 { 00:37:35.869 "id": 1, 00:37:35.869 "state": "CLOSED", 00:37:35.869 "validity": 1.0 00:37:35.869 }, 00:37:35.869 { 00:37:35.869 "id": 2, 00:37:35.869 "state": "CLOSED", 00:37:35.869 "validity": 0.007843137254901933 00:37:35.869 }, 00:37:35.869 { 00:37:35.869 "id": 3, 00:37:35.869 "state": "FREE", 00:37:35.869 "validity": 0.0 00:37:35.869 }, 00:37:35.869 { 00:37:35.869 "id": 4, 00:37:35.869 "state": "FREE", 00:37:35.869 "validity": 0.0 00:37:35.869 }, 00:37:35.869 { 00:37:35.869 "id": 5, 00:37:35.869 "state": "FREE", 00:37:35.869 "validity": 0.0 00:37:35.869 }, 00:37:35.869 { 00:37:35.869 "id": 6, 00:37:35.869 "state": "FREE", 00:37:35.869 "validity": 0.0 00:37:35.869 }, 00:37:35.869 { 00:37:35.869 "id": 7, 00:37:35.869 "state": "FREE", 00:37:35.869 "validity": 0.0 00:37:35.869 }, 00:37:35.869 { 00:37:35.869 "id": 8, 00:37:35.869 "state": "FREE", 00:37:35.869 "validity": 0.0 00:37:35.869 }, 00:37:35.869 { 00:37:35.869 "id": 9, 00:37:35.869 "state": "FREE", 00:37:35.869 "validity": 0.0 00:37:35.869 }, 00:37:35.869 { 00:37:35.869 "id": 10, 00:37:35.869 "state": "FREE", 00:37:35.869 "validity": 0.0 00:37:35.869 }, 00:37:35.869 { 00:37:35.869 "id": 11, 00:37:35.869 "state": "FREE", 00:37:35.869 "validity": 0.0 00:37:35.869 }, 00:37:35.869 { 00:37:35.869 "id": 12, 00:37:35.869 "state": "FREE", 00:37:35.869 "validity": 0.0 00:37:35.869 }, 00:37:35.869 { 00:37:35.869 "id": 13, 00:37:35.869 "state": "FREE", 00:37:35.869 "validity": 0.0 00:37:35.869 }, 00:37:35.869 { 00:37:35.869 "id": 14, 00:37:35.869 "state": "FREE", 00:37:35.869 "validity": 0.0 00:37:35.869 }, 00:37:35.869 { 00:37:35.869 "id": 15, 00:37:35.869 "state": "FREE", 00:37:35.869 "validity": 0.0 00:37:35.869 }, 00:37:35.869 { 00:37:35.869 "id": 16, 00:37:35.869 "state": "FREE", 00:37:35.869 "validity": 0.0 00:37:35.870 }, 00:37:35.870 { 00:37:35.870 "id": 17, 00:37:35.870 "state": "FREE", 00:37:35.870 "validity": 0.0 00:37:35.870 } 00:37:35.870 ], 00:37:35.870 "read-only": true 00:37:35.870 }, 00:37:35.870 { 00:37:35.870 "name": "cache_device", 00:37:35.870 "type": "bdev", 00:37:35.870 "chunks": [ 00:37:35.870 { 00:37:35.870 "id": 0, 00:37:35.870 "state": "INACTIVE", 00:37:35.870 "utilization": 0.0 00:37:35.870 }, 00:37:35.870 { 00:37:35.870 "id": 1, 00:37:35.870 "state": "OPEN", 00:37:35.870 "utilization": 0.0 00:37:35.870 }, 00:37:35.870 { 00:37:35.870 "id": 2, 00:37:35.870 "state": "OPEN", 00:37:35.870 "utilization": 0.0 00:37:35.870 }, 00:37:35.870 { 00:37:35.870 "id": 3, 00:37:35.870 "state": "FREE", 00:37:35.870 "utilization": 0.0 00:37:35.870 }, 00:37:35.870 { 00:37:35.870 "id": 4, 00:37:35.870 "state": "FREE", 00:37:35.870 "utilization": 0.0 00:37:35.870 } 00:37:35.870 ], 00:37:35.870 "read-only": true 00:37:35.870 }, 00:37:35.870 { 00:37:35.870 "name": "verbose_mode", 00:37:35.870 "value": true, 00:37:35.870 "unit": "", 00:37:35.870 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:37:35.870 }, 00:37:35.870 { 00:37:35.870 "name": "prep_upgrade_on_shutdown", 00:37:35.870 "value": false, 00:37:35.870 "unit": "", 00:37:35.870 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:37:35.870 } 00:37:35.870 ] 00:37:35.870 } 00:37:35.870 12:39:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:37:35.870 12:39:45 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:37:35.870 12:39:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:37:36.129 12:39:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:37:36.129 12:39:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:37:36.129 12:39:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:37:36.129 12:39:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:37:36.129 12:39:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:37:36.388 Validate MD5 checksum, iteration 1 00:37:36.388 12:39:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:37:36.388 12:39:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:37:36.388 12:39:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:37:36.388 12:39:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:37:36.388 12:39:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:37:36.388 12:39:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:37:36.388 12:39:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:37:36.388 12:39:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:37:36.388 12:39:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:37:36.388 12:39:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:37:36.388 12:39:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:37:36.388 12:39:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:37:36.388 12:39:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:37:36.388 [2024-07-10 12:39:45.754454] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
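The used=0 and opened=0 values traced above come from filtering the bdev_ftl_get_properties JSON with jq: before the first checksum pass the test insists that no cache chunk is utilized and no band is open. A minimal sketch of that pre-check, assuming ftl_get_properties is just a thin wrapper around the rpc.py call traced at @59 (the jq filters are copied from the trace; $spdk is shorthand for /home/vagrant/spdk_repo/spdk):

  # Sketch only; ftl_get_properties is assumed to wrap the rpc.py call above.
  spdk=/home/vagrant/spdk_repo/spdk
  used=$($spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')
  opened=$($spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl | jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length')
  [[ $used -ne 0 || $opened -ne 0 ]] && exit 1   # a non-empty cache or open band would invalidate the run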
00:37:36.388 [2024-07-10 12:39:45.754746] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86664 ] 00:37:36.647 [2024-07-10 12:39:45.923050] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:36.909 [2024-07-10 12:39:46.204693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:41.026  Copying: 718/1024 [MB] (718 MBps) Copying: 1024/1024 [MB] (average 697 MBps) 00:37:41.026 00:37:41.026 12:39:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:37:41.026 12:39:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:37:42.931 12:39:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:37:42.931 Validate MD5 checksum, iteration 2 00:37:42.931 12:39:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=17d153b8afec25cdc0edf85a3efefdcf 00:37:42.931 12:39:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 17d153b8afec25cdc0edf85a3efefdcf != \1\7\d\1\5\3\b\8\a\f\e\c\2\5\c\d\c\0\e\d\f\8\5\a\3\e\f\e\f\d\c\f ]] 00:37:42.931 12:39:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:37:42.931 12:39:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:37:42.931 12:39:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:37:42.931 12:39:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:37:42.931 12:39:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:37:42.931 12:39:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:37:42.931 12:39:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:37:42.931 12:39:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:37:42.931 12:39:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:37:42.931 [2024-07-10 12:39:52.172618] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
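Each 'Validate MD5 checksum' iteration traced above reads one 1 GiB slice of the exported ftln1 bdev back over NVMe/TCP with spdk_dd, hashes the local copy, and compares it against the digest recorded for that slice when the data was written. A minimal sketch of the loop, assuming the reference digests live in an md5_sums array and the pass count in iterations (both names are assumptions; the spdk_dd, md5sum and cut invocations mirror upgrade_shutdown.sh@97-105; $spdk is shorthand for /home/vagrant/spdk_repo/spdk):

  # Sketch of the checksum loop; md5_sums and iterations are assumed names.
  spdk=/home/vagrant/spdk_repo/spdk
  skip=0
  for ((i = 0; i < iterations; i++)); do
      echo "Validate MD5 checksum, iteration $((i + 1))"
      $spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
          --json=$spdk/test/ftl/config/ini.json --ib=ftln1 --of=$spdk/test/ftl/file \
          --bs=1048576 --count=1024 --qd=2 --skip=$skip
      skip=$((skip + 1024))
      sum=$(md5sum $spdk/test/ftl/file | cut -f1 -d ' ')
      [[ $sum != "${md5_sums[i]}" ]] && exit 1   # any mismatch fails the test
  done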
00:37:42.932 [2024-07-10 12:39:52.172981] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86733 ] 00:37:42.932 [2024-07-10 12:39:52.345537] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:43.190 [2024-07-10 12:39:52.620522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:49.864  Copying: 683/1024 [MB] (683 MBps) Copying: 1024/1024 [MB] (average 690 MBps) 00:37:49.864 00:37:49.864 12:39:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:37:49.864 12:39:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:37:51.242 12:40:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:37:51.242 12:40:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=ef7a1f9146c9f138b6e8816b9015cbbd 00:37:51.242 12:40:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ ef7a1f9146c9f138b6e8816b9015cbbd != \e\f\7\a\1\f\9\1\4\6\c\9\f\1\3\8\b\6\e\8\8\1\6\b\9\0\1\5\c\b\b\d ]] 00:37:51.242 12:40:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:37:51.242 12:40:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:37:51.242 12:40:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:37:51.242 12:40:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 86576 ]] 00:37:51.242 12:40:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 86576 00:37:51.242 12:40:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:37:51.242 12:40:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:37:51.242 12:40:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:37:51.242 12:40:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:37:51.242 12:40:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:37:51.242 12:40:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=86822 00:37:51.242 12:40:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:37:51.242 12:40:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:37:51.242 12:40:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 86822 00:37:51.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:51.242 12:40:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@829 -- # '[' -z 86822 ']' 00:37:51.242 12:40:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:51.242 12:40:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:51.242 12:40:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
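The tcp_target_shutdown_dirty / tcp_target_setup pair traced above is the point of the test: the running target (pid 86576) is killed with SIGKILL so FTL gets no chance to shut down cleanly, then a fresh spdk_tgt (pid 86822) is started from the saved tgt.json and the script waits for its RPC socket. A minimal sketch of that sequence, assuming waitforlisten is the stock autotest_common.sh helper that polls /var/tmp/spdk.sock (binary, cpumask and config path are taken from the trace; $spdk is shorthand for /home/vagrant/spdk_repo/spdk):

  # Sketch of the dirty shutdown + restart traced above.
  spdk=/home/vagrant/spdk_repo/spdk
  kill -9 "$spdk_tgt_pid"          # no clean shutdown: the next start must recover
  unset spdk_tgt_pid
  $spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=$spdk/test/ftl/config/tgt.json &
  spdk_tgt_pid=$!
  waitforlisten "$spdk_tgt_pid"    # returns once /var/tmp/spdk.sock answers RPCs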
00:37:51.242 12:40:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:51.242 12:40:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:37:51.242 [2024-07-10 12:40:00.468264] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:37:51.243 [2024-07-10 12:40:00.468398] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86822 ] 00:37:51.243 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 828: 86576 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:37:51.243 [2024-07-10 12:40:00.626877] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:51.501 [2024-07-10 12:40:00.870205] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:52.439 [2024-07-10 12:40:01.860145] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:37:52.439 [2024-07-10 12:40:01.860232] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:37:52.700 [2024-07-10 12:40:02.008521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:52.700 [2024-07-10 12:40:02.008591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:37:52.700 [2024-07-10 12:40:02.008626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:37:52.700 [2024-07-10 12:40:02.008642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:52.700 [2024-07-10 12:40:02.008725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:52.700 [2024-07-10 12:40:02.008768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:37:52.700 [2024-07-10 12:40:02.008787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.049 ms 00:37:52.700 [2024-07-10 12:40:02.008804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:52.700 [2024-07-10 12:40:02.008843] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:37:52.700 [2024-07-10 12:40:02.009913] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:37:52.700 [2024-07-10 12:40:02.009956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:52.700 [2024-07-10 12:40:02.009974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:37:52.700 [2024-07-10 12:40:02.009992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.120 ms 00:37:52.700 [2024-07-10 12:40:02.010007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:52.700 [2024-07-10 12:40:02.010597] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:37:52.700 [2024-07-10 12:40:02.037325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:52.700 [2024-07-10 12:40:02.037377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:37:52.700 [2024-07-10 12:40:02.037403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.774 ms 00:37:52.700 [2024-07-10 12:40:02.037427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:52.700 [2024-07-10 12:40:02.053138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:37:52.700 [2024-07-10 12:40:02.053191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:37:52.700 [2024-07-10 12:40:02.053218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:37:52.700 [2024-07-10 12:40:02.053233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:52.700 [2024-07-10 12:40:02.053952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:52.700 [2024-07-10 12:40:02.053993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:37:52.700 [2024-07-10 12:40:02.054022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.585 ms 00:37:52.700 [2024-07-10 12:40:02.054040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:52.700 [2024-07-10 12:40:02.054135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:52.700 [2024-07-10 12:40:02.054161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:37:52.700 [2024-07-10 12:40:02.054181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:37:52.700 [2024-07-10 12:40:02.054200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:52.700 [2024-07-10 12:40:02.054257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:52.700 [2024-07-10 12:40:02.054278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:37:52.700 [2024-07-10 12:40:02.054296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:37:52.700 [2024-07-10 12:40:02.054317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:52.700 [2024-07-10 12:40:02.054369] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:37:52.700 [2024-07-10 12:40:02.059299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:52.700 [2024-07-10 12:40:02.059342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:37:52.700 [2024-07-10 12:40:02.059363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.950 ms 00:37:52.700 [2024-07-10 12:40:02.059379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:52.700 [2024-07-10 12:40:02.059434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:52.700 [2024-07-10 12:40:02.059453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:37:52.700 [2024-07-10 12:40:02.059471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:37:52.700 [2024-07-10 12:40:02.059486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:52.700 [2024-07-10 12:40:02.059546] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:37:52.700 [2024-07-10 12:40:02.059582] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:37:52.700 [2024-07-10 12:40:02.059636] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:37:52.700 [2024-07-10 12:40:02.059665] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x168 bytes 00:37:52.700 [2024-07-10 12:40:02.059785] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:37:52.700 [2024-07-10 12:40:02.059810] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:37:52.700 [2024-07-10 12:40:02.059830] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:37:52.700 [2024-07-10 12:40:02.059853] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:37:52.700 [2024-07-10 12:40:02.059891] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:37:52.700 [2024-07-10 12:40:02.059910] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:37:52.700 [2024-07-10 12:40:02.059926] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:37:52.700 [2024-07-10 12:40:02.059947] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:37:52.700 [2024-07-10 12:40:02.059962] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:37:52.700 [2024-07-10 12:40:02.059981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:52.700 [2024-07-10 12:40:02.059998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:37:52.700 [2024-07-10 12:40:02.060021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.439 ms 00:37:52.700 [2024-07-10 12:40:02.060037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:52.700 [2024-07-10 12:40:02.060149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:52.700 [2024-07-10 12:40:02.060169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:37:52.700 [2024-07-10 12:40:02.060186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:37:52.700 [2024-07-10 12:40:02.060203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:52.700 [2024-07-10 12:40:02.060325] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:37:52.700 [2024-07-10 12:40:02.060347] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:37:52.700 [2024-07-10 12:40:02.060361] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:37:52.700 [2024-07-10 12:40:02.060375] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:52.700 [2024-07-10 12:40:02.060389] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:37:52.700 [2024-07-10 12:40:02.060405] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:37:52.700 [2024-07-10 12:40:02.060420] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:37:52.700 [2024-07-10 12:40:02.060431] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:37:52.700 [2024-07-10 12:40:02.060448] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:37:52.700 [2024-07-10 12:40:02.060466] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:52.700 [2024-07-10 12:40:02.060484] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:37:52.700 [2024-07-10 12:40:02.060500] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:37:52.700 [2024-07-10 12:40:02.060517] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:52.700 [2024-07-10 12:40:02.060532] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:37:52.700 [2024-07-10 12:40:02.060550] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:37:52.700 [2024-07-10 12:40:02.060565] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:52.700 [2024-07-10 12:40:02.060580] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:37:52.701 [2024-07-10 12:40:02.060596] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:37:52.701 [2024-07-10 12:40:02.060609] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:52.701 [2024-07-10 12:40:02.060623] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:37:52.701 [2024-07-10 12:40:02.060637] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:37:52.701 [2024-07-10 12:40:02.060654] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:37:52.701 [2024-07-10 12:40:02.060673] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:37:52.701 [2024-07-10 12:40:02.060687] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:37:52.701 [2024-07-10 12:40:02.060704] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:37:52.701 [2024-07-10 12:40:02.060721] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:37:52.701 [2024-07-10 12:40:02.060756] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:37:52.701 [2024-07-10 12:40:02.060773] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:37:52.701 [2024-07-10 12:40:02.060791] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:37:52.701 [2024-07-10 12:40:02.060807] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:37:52.701 [2024-07-10 12:40:02.060822] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:37:52.701 [2024-07-10 12:40:02.060837] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:37:52.701 [2024-07-10 12:40:02.060854] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:37:52.701 [2024-07-10 12:40:02.060870] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:52.701 [2024-07-10 12:40:02.060888] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:37:52.701 [2024-07-10 12:40:02.060904] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:37:52.701 [2024-07-10 12:40:02.060921] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:52.701 [2024-07-10 12:40:02.060938] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:37:52.701 [2024-07-10 12:40:02.060955] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:37:52.701 [2024-07-10 12:40:02.060972] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:52.701 [2024-07-10 12:40:02.060987] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:37:52.701 [2024-07-10 12:40:02.061001] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:37:52.701 [2024-07-10 12:40:02.061017] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:52.701 [2024-07-10 12:40:02.061031] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:37:52.701 [2024-07-10 12:40:02.061047] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:37:52.701 [2024-07-10 12:40:02.061064] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:37:52.701 [2024-07-10 12:40:02.061081] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:37:52.701 [2024-07-10 12:40:02.061097] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:37:52.701 [2024-07-10 12:40:02.061115] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:37:52.701 [2024-07-10 12:40:02.061150] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:37:52.701 [2024-07-10 12:40:02.061163] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:37:52.701 [2024-07-10 12:40:02.061175] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:37:52.701 [2024-07-10 12:40:02.061189] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:37:52.701 [2024-07-10 12:40:02.061206] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:37:52.701 [2024-07-10 12:40:02.061236] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:37:52.701 [2024-07-10 12:40:02.061254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:37:52.701 [2024-07-10 12:40:02.061273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:37:52.701 [2024-07-10 12:40:02.061289] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:37:52.701 [2024-07-10 12:40:02.061304] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:37:52.701 [2024-07-10 12:40:02.061323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:37:52.701 [2024-07-10 12:40:02.061341] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:37:52.701 [2024-07-10 12:40:02.061358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:37:52.701 [2024-07-10 12:40:02.061378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:37:52.701 [2024-07-10 12:40:02.061393] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:37:52.701 [2024-07-10 12:40:02.061411] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:37:52.701 [2024-07-10 12:40:02.061430] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:37:52.701 [2024-07-10 12:40:02.061447] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:37:52.701 [2024-07-10 12:40:02.061467] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:37:52.701 [2024-07-10 12:40:02.061486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:37:52.701 [2024-07-10 12:40:02.061504] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:37:52.701 [2024-07-10 12:40:02.061521] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:37:52.701 [2024-07-10 12:40:02.061540] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:37:52.701 [2024-07-10 12:40:02.061554] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:37:52.701 [2024-07-10 12:40:02.061568] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:37:52.701 [2024-07-10 12:40:02.061587] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:37:52.701 [2024-07-10 12:40:02.061608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:52.701 [2024-07-10 12:40:02.061622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:37:52.701 [2024-07-10 12:40:02.061642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.345 ms 00:37:52.701 [2024-07-10 12:40:02.061661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:52.701 [2024-07-10 12:40:02.108323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:52.701 [2024-07-10 12:40:02.108383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:37:52.701 [2024-07-10 12:40:02.108411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 46.620 ms 00:37:52.701 [2024-07-10 12:40:02.108427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:52.701 [2024-07-10 12:40:02.108516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:52.701 [2024-07-10 12:40:02.108535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:37:52.701 [2024-07-10 12:40:02.108556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:37:52.701 [2024-07-10 12:40:02.108579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:52.701 [2024-07-10 12:40:02.156392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:52.701 [2024-07-10 12:40:02.156455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:37:52.701 [2024-07-10 12:40:02.156483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 47.772 ms 00:37:52.701 [2024-07-10 12:40:02.156498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:52.701 [2024-07-10 12:40:02.156594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:52.701 [2024-07-10 12:40:02.156617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:37:52.701 [2024-07-10 12:40:02.156635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:37:52.701 [2024-07-10 12:40:02.156650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:52.701 [2024-07-10 12:40:02.156859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:52.701 [2024-07-10 12:40:02.156885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:37:52.701 [2024-07-10 12:40:02.156904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.102 ms 00:37:52.701 [2024-07-10 12:40:02.156922] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:37:52.701 [2024-07-10 12:40:02.156990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:52.701 [2024-07-10 12:40:02.157009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:37:52.701 [2024-07-10 12:40:02.157032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:37:52.701 [2024-07-10 12:40:02.157049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:52.961 [2024-07-10 12:40:02.179880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:52.961 [2024-07-10 12:40:02.179945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:37:52.961 [2024-07-10 12:40:02.179987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.828 ms 00:37:52.961 [2024-07-10 12:40:02.180004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:52.961 [2024-07-10 12:40:02.180217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:52.961 [2024-07-10 12:40:02.180242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:37:52.961 [2024-07-10 12:40:02.180262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:37:52.961 [2024-07-10 12:40:02.180280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:52.961 [2024-07-10 12:40:02.216107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:52.961 [2024-07-10 12:40:02.216188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:37:52.961 [2024-07-10 12:40:02.216215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.815 ms 00:37:52.961 [2024-07-10 12:40:02.216232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:52.961 [2024-07-10 12:40:02.232604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:52.961 [2024-07-10 12:40:02.232679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:37:52.961 [2024-07-10 12:40:02.232707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.686 ms 00:37:52.961 [2024-07-10 12:40:02.232723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:52.961 [2024-07-10 12:40:02.325755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:52.961 [2024-07-10 12:40:02.325840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:37:52.961 [2024-07-10 12:40:02.325869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 93.008 ms 00:37:52.961 [2024-07-10 12:40:02.325887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:52.961 [2024-07-10 12:40:02.326238] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:37:52.961 [2024-07-10 12:40:02.326496] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:37:52.961 [2024-07-10 12:40:02.326701] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:37:52.961 [2024-07-10 12:40:02.326955] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:37:52.961 [2024-07-10 12:40:02.326990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:52.961 [2024-07-10 12:40:02.327011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:37:52.961 [2024-07-10 
12:40:02.327032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.993 ms 00:37:52.961 [2024-07-10 12:40:02.327051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:52.961 [2024-07-10 12:40:02.327207] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:37:52.961 [2024-07-10 12:40:02.327235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:52.961 [2024-07-10 12:40:02.327252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:37:52.961 [2024-07-10 12:40:02.327270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:37:52.961 [2024-07-10 12:40:02.327288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:52.961 [2024-07-10 12:40:02.352784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:52.961 [2024-07-10 12:40:02.352865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:37:52.961 [2024-07-10 12:40:02.352895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.494 ms 00:37:52.961 [2024-07-10 12:40:02.352920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:52.961 [2024-07-10 12:40:02.369279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:52.961 [2024-07-10 12:40:02.369347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:37:52.961 [2024-07-10 12:40:02.369375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:37:52.961 [2024-07-10 12:40:02.369390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:52.961 [2024-07-10 12:40:02.369913] ftl_nv_cache.c:2471:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:37:53.529 [2024-07-10 12:40:02.919988] ftl_nv_cache.c:2408:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:37:53.529 [2024-07-10 12:40:02.920179] ftl_nv_cache.c:2471:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:37:54.095 [2024-07-10 12:40:03.454615] ftl_nv_cache.c:2408:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:37:54.095 [2024-07-10 12:40:03.454754] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:37:54.095 [2024-07-10 12:40:03.454785] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:37:54.095 [2024-07-10 12:40:03.454801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:54.095 [2024-07-10 12:40:03.454814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:37:54.095 [2024-07-10 12:40:03.454832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1087.026 ms 00:37:54.095 [2024-07-10 12:40:03.454842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:54.095 [2024-07-10 12:40:03.454880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:54.095 [2024-07-10 12:40:03.454892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:37:54.095 [2024-07-10 12:40:03.454906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:37:54.095 [2024-07-10 12:40:03.454916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 
status: 0 00:37:54.095 [2024-07-10 12:40:03.467624] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:37:54.095 [2024-07-10 12:40:03.467807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:54.095 [2024-07-10 12:40:03.467828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:37:54.095 [2024-07-10 12:40:03.467842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.894 ms 00:37:54.095 [2024-07-10 12:40:03.467852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:54.095 [2024-07-10 12:40:03.468455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:54.095 [2024-07-10 12:40:03.468485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:37:54.095 [2024-07-10 12:40:03.468498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.517 ms 00:37:54.095 [2024-07-10 12:40:03.468509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:54.095 [2024-07-10 12:40:03.470433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:54.095 [2024-07-10 12:40:03.470462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:37:54.095 [2024-07-10 12:40:03.470475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.908 ms 00:37:54.095 [2024-07-10 12:40:03.470486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:54.095 [2024-07-10 12:40:03.470529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:54.095 [2024-07-10 12:40:03.470540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:37:54.095 [2024-07-10 12:40:03.470552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:37:54.095 [2024-07-10 12:40:03.470562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:54.095 [2024-07-10 12:40:03.470662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:54.095 [2024-07-10 12:40:03.470674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:37:54.095 [2024-07-10 12:40:03.470689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:37:54.095 [2024-07-10 12:40:03.470700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:54.095 [2024-07-10 12:40:03.470722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:54.095 [2024-07-10 12:40:03.470747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:37:54.095 [2024-07-10 12:40:03.470763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:37:54.095 [2024-07-10 12:40:03.470773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:54.095 [2024-07-10 12:40:03.470804] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:37:54.095 [2024-07-10 12:40:03.470816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:54.095 [2024-07-10 12:40:03.470826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:37:54.095 [2024-07-10 12:40:03.470837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:37:54.095 [2024-07-10 12:40:03.470850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:54.095 [2024-07-10 12:40:03.470903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:54.095 
[2024-07-10 12:40:03.470914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:37:54.095 [2024-07-10 12:40:03.470925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:37:54.095 [2024-07-10 12:40:03.470934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:54.095 [2024-07-10 12:40:03.472229] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1465.590 ms, result 0 00:37:54.095 [2024-07-10 12:40:03.484568] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:54.096 [2024-07-10 12:40:03.500533] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:37:54.096 [2024-07-10 12:40:03.510583] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:54.096 12:40:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:54.096 12:40:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # return 0 00:37:54.096 12:40:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:37:54.096 12:40:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:37:54.096 12:40:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:37:54.096 12:40:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:37:54.096 12:40:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:37:54.096 12:40:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:37:54.096 12:40:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:37:54.096 Validate MD5 checksum, iteration 1 00:37:54.096 12:40:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:37:54.096 12:40:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:37:54.096 12:40:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:37:54.096 12:40:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:37:54.096 12:40:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:37:54.096 12:40:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:37:54.355 [2024-07-10 12:40:03.640802] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
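Because the previous target was killed, this second FTL startup takes the recovery path: the superblock loads dirty ('SHM: clean 0, shm_clean 0'), band and chunk state are rebuilt, the two open cache chunks (seq ids 14 and 15) are recovered, and the NV cache comes back with 2 full chunks instead of the 0 full / 4 empty split seen on the clean start, all within roughly 1.5 s before the NVMe/TCP listener is back on port 4420. The recovered state can be inspected with the same properties RPC used earlier; a small sketch (the trailing .chunks filter is an assumption, the rpc.py plus jq pattern is the one traced at @59/@82):

  # Sketch: dump the recovered NV cache chunk states after the dirty restart.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl | jq '.properties[] | select(.name == "cache_device") | .chunks'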
00:37:54.355 [2024-07-10 12:40:03.641367] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86857 ] 00:37:54.355 [2024-07-10 12:40:03.812476] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:54.613 [2024-07-10 12:40:04.079038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:59.674  Copying: 680/1024 [MB] (680 MBps) Copying: 1024/1024 [MB] (average 675 MBps) 00:37:59.674 00:37:59.674 12:40:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:37:59.674 12:40:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:38:01.576 12:40:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:38:01.576 Validate MD5 checksum, iteration 2 00:38:01.576 12:40:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=17d153b8afec25cdc0edf85a3efefdcf 00:38:01.576 12:40:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 17d153b8afec25cdc0edf85a3efefdcf != \1\7\d\1\5\3\b\8\a\f\e\c\2\5\c\d\c\0\e\d\f\8\5\a\3\e\f\e\f\d\c\f ]] 00:38:01.576 12:40:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:38:01.576 12:40:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:38:01.576 12:40:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:38:01.577 12:40:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:38:01.577 12:40:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:38:01.577 12:40:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:38:01.577 12:40:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:38:01.577 12:40:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:38:01.577 12:40:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:38:01.577 [2024-07-10 12:40:10.968521] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:38:01.577 [2024-07-10 12:40:10.968899] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86935 ] 00:38:01.835 [2024-07-10 12:40:11.139518] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:02.093 [2024-07-10 12:40:11.391931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:38:06.156  Copying: 703/1024 [MB] (703 MBps) Copying: 1024/1024 [MB] (average 678 MBps) 00:38:06.156 00:38:06.156 12:40:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:38:06.156 12:40:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:38:08.083 12:40:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:38:08.083 12:40:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=ef7a1f9146c9f138b6e8816b9015cbbd 00:38:08.083 12:40:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ ef7a1f9146c9f138b6e8816b9015cbbd != \e\f\7\a\1\f\9\1\4\6\c\9\f\1\3\8\b\6\e\8\8\1\6\b\9\0\1\5\c\b\b\d ]] 00:38:08.083 12:40:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:38:08.083 12:40:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:38:08.083 12:40:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:38:08.083 12:40:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:38:08.083 12:40:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:38:08.083 12:40:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:38:08.083 12:40:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:38:08.083 12:40:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:38:08.083 12:40:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:38:08.083 12:40:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:38:08.083 12:40:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 86822 ]] 00:38:08.083 12:40:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 86822 00:38:08.083 12:40:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@948 -- # '[' -z 86822 ']' 00:38:08.083 12:40:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # kill -0 86822 00:38:08.083 12:40:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # uname 00:38:08.083 12:40:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:08.083 12:40:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86822 00:38:08.083 killing process with pid 86822 00:38:08.083 12:40:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:38:08.083 12:40:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:38:08.083 12:40:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86822' 00:38:08.083 12:40:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@967 -- # kill 86822 00:38:08.083 12:40:17 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@972 -- # wait 86822 00:38:09.017 [2024-07-10 12:40:18.455870] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:38:09.017 [2024-07-10 12:40:18.476210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:09.018 [2024-07-10 12:40:18.476254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:38:09.018 [2024-07-10 12:40:18.476272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:38:09.018 [2024-07-10 12:40:18.476284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:09.018 [2024-07-10 12:40:18.476307] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:38:09.018 [2024-07-10 12:40:18.480139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:09.018 [2024-07-10 12:40:18.480169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:38:09.018 [2024-07-10 12:40:18.480183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.816 ms 00:38:09.018 [2024-07-10 12:40:18.480193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:09.018 [2024-07-10 12:40:18.480396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:09.018 [2024-07-10 12:40:18.480409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:38:09.018 [2024-07-10 12:40:18.480426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.178 ms 00:38:09.018 [2024-07-10 12:40:18.480437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:09.018 [2024-07-10 12:40:18.481884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:09.018 [2024-07-10 12:40:18.481920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:38:09.018 [2024-07-10 12:40:18.481933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.432 ms 00:38:09.018 [2024-07-10 12:40:18.481944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:09.018 [2024-07-10 12:40:18.482959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:09.018 [2024-07-10 12:40:18.482982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:38:09.018 [2024-07-10 12:40:18.482994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.980 ms 00:38:09.018 [2024-07-10 12:40:18.483012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:09.277 [2024-07-10 12:40:18.499399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:09.277 [2024-07-10 12:40:18.499451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:38:09.277 [2024-07-10 12:40:18.499469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.335 ms 00:38:09.277 [2024-07-10 12:40:18.499480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:09.277 [2024-07-10 12:40:18.508236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:09.277 [2024-07-10 12:40:18.508286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:38:09.277 [2024-07-10 12:40:18.508312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.724 ms 00:38:09.277 [2024-07-10 12:40:18.508323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:09.277 [2024-07-10 12:40:18.508438] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:09.277 [2024-07-10 12:40:18.508453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:38:09.277 [2024-07-10 12:40:18.508467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.073 ms 00:38:09.277 [2024-07-10 12:40:18.508478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:09.277 [2024-07-10 12:40:18.524256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:09.277 [2024-07-10 12:40:18.524294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:38:09.277 [2024-07-10 12:40:18.524308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.784 ms 00:38:09.277 [2024-07-10 12:40:18.524318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:09.277 [2024-07-10 12:40:18.539511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:09.277 [2024-07-10 12:40:18.539545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:38:09.277 [2024-07-10 12:40:18.539558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.181 ms 00:38:09.277 [2024-07-10 12:40:18.539569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:09.277 [2024-07-10 12:40:18.554991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:09.277 [2024-07-10 12:40:18.555048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:38:09.277 [2024-07-10 12:40:18.555066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.407 ms 00:38:09.277 [2024-07-10 12:40:18.555078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:09.277 [2024-07-10 12:40:18.570369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:09.277 [2024-07-10 12:40:18.570409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:38:09.277 [2024-07-10 12:40:18.570423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.224 ms 00:38:09.277 [2024-07-10 12:40:18.570433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:09.277 [2024-07-10 12:40:18.570470] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:38:09.277 [2024-07-10 12:40:18.570488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:38:09.277 [2024-07-10 12:40:18.570502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:38:09.277 [2024-07-10 12:40:18.570513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:38:09.277 [2024-07-10 12:40:18.570525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:38:09.277 [2024-07-10 12:40:18.570537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:38:09.277 [2024-07-10 12:40:18.570548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:38:09.277 [2024-07-10 12:40:18.570559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:38:09.277 [2024-07-10 12:40:18.570570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:38:09.277 [2024-07-10 12:40:18.570581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 
0 / 261120 wr_cnt: 0 state: free 00:38:09.277 [2024-07-10 12:40:18.570591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:38:09.277 [2024-07-10 12:40:18.570602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:38:09.277 [2024-07-10 12:40:18.570613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:38:09.277 [2024-07-10 12:40:18.570624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:38:09.277 [2024-07-10 12:40:18.570634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:38:09.277 [2024-07-10 12:40:18.570645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:38:09.277 [2024-07-10 12:40:18.570655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:38:09.277 [2024-07-10 12:40:18.570666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:38:09.277 [2024-07-10 12:40:18.570676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:38:09.278 [2024-07-10 12:40:18.570689] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:38:09.278 [2024-07-10 12:40:18.570718] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: b62d77fa-7044-4896-ab6a-6722d4d819d0 00:38:09.278 [2024-07-10 12:40:18.570745] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:38:09.278 [2024-07-10 12:40:18.570760] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:38:09.278 [2024-07-10 12:40:18.570770] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:38:09.278 [2024-07-10 12:40:18.570781] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:38:09.278 [2024-07-10 12:40:18.570790] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:38:09.278 [2024-07-10 12:40:18.570800] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:38:09.278 [2024-07-10 12:40:18.570811] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:38:09.278 [2024-07-10 12:40:18.570822] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:38:09.278 [2024-07-10 12:40:18.570831] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:38:09.278 [2024-07-10 12:40:18.570842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:09.278 [2024-07-10 12:40:18.570853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:38:09.278 [2024-07-10 12:40:18.570864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.375 ms 00:38:09.278 [2024-07-10 12:40:18.570875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:09.278 [2024-07-10 12:40:18.591088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:09.278 [2024-07-10 12:40:18.591125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:38:09.278 [2024-07-10 12:40:18.591139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.224 ms 00:38:09.278 [2024-07-10 12:40:18.591151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:09.278 [2024-07-10 12:40:18.591662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:38:09.278 [2024-07-10 12:40:18.591679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:38:09.278 [2024-07-10 12:40:18.591692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.464 ms 00:38:09.278 [2024-07-10 12:40:18.591703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:09.278 [2024-07-10 12:40:18.655034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:38:09.278 [2024-07-10 12:40:18.655104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:38:09.278 [2024-07-10 12:40:18.655121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:38:09.278 [2024-07-10 12:40:18.655133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:09.278 [2024-07-10 12:40:18.655190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:38:09.278 [2024-07-10 12:40:18.655202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:38:09.278 [2024-07-10 12:40:18.655214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:38:09.278 [2024-07-10 12:40:18.655224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:09.278 [2024-07-10 12:40:18.655338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:38:09.278 [2024-07-10 12:40:18.655351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:38:09.278 [2024-07-10 12:40:18.655362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:38:09.278 [2024-07-10 12:40:18.655372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:09.278 [2024-07-10 12:40:18.655391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:38:09.278 [2024-07-10 12:40:18.655402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:38:09.278 [2024-07-10 12:40:18.655415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:38:09.278 [2024-07-10 12:40:18.655424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:09.537 [2024-07-10 12:40:18.775805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:38:09.537 [2024-07-10 12:40:18.775873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:38:09.537 [2024-07-10 12:40:18.775890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:38:09.537 [2024-07-10 12:40:18.775902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:09.537 [2024-07-10 12:40:18.879710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:38:09.537 [2024-07-10 12:40:18.879799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:38:09.537 [2024-07-10 12:40:18.879819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:38:09.537 [2024-07-10 12:40:18.879832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:09.537 [2024-07-10 12:40:18.879949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:38:09.537 [2024-07-10 12:40:18.879962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:38:09.537 [2024-07-10 12:40:18.879975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:38:09.537 [2024-07-10 12:40:18.879986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:09.537 [2024-07-10 12:40:18.880035] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:38:09.537 [2024-07-10 12:40:18.880048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:38:09.537 [2024-07-10 12:40:18.880060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:38:09.537 [2024-07-10 12:40:18.880070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:09.537 [2024-07-10 12:40:18.880206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:38:09.537 [2024-07-10 12:40:18.880229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:38:09.537 [2024-07-10 12:40:18.880241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:38:09.537 [2024-07-10 12:40:18.880252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:09.537 [2024-07-10 12:40:18.880293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:38:09.537 [2024-07-10 12:40:18.880307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:38:09.537 [2024-07-10 12:40:18.880318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:38:09.537 [2024-07-10 12:40:18.880330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:09.537 [2024-07-10 12:40:18.880371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:38:09.537 [2024-07-10 12:40:18.880388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:38:09.537 [2024-07-10 12:40:18.880400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:38:09.537 [2024-07-10 12:40:18.880412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:09.537 [2024-07-10 12:40:18.880461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:38:09.537 [2024-07-10 12:40:18.880473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:38:09.537 [2024-07-10 12:40:18.880485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:38:09.537 [2024-07-10 12:40:18.880496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:09.537 [2024-07-10 12:40:18.880631] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 405.042 ms, result 0 00:38:11.437 12:40:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:38:11.437 12:40:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:38:11.437 12:40:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:38:11.437 12:40:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:38:11.437 12:40:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:38:11.437 12:40:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:38:11.437 12:40:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:38:11.437 Remove shared memory files 00:38:11.437 12:40:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:38:11.437 12:40:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:38:11.437 12:40:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:38:11.437 12:40:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid86576 00:38:11.437 12:40:20 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:38:11.437 12:40:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:38:11.437 00:38:11.437 real 1m33.707s 00:38:11.437 user 2m10.527s 00:38:11.437 sys 0m22.280s 00:38:11.437 12:40:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:11.437 12:40:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:38:11.437 ************************************ 00:38:11.437 END TEST ftl_upgrade_shutdown 00:38:11.437 ************************************ 00:38:11.437 12:40:20 ftl -- common/autotest_common.sh@1142 -- # return 0 00:38:11.437 12:40:20 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:38:11.437 12:40:20 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:38:11.437 12:40:20 ftl -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:38:11.437 12:40:20 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:11.437 12:40:20 ftl -- common/autotest_common.sh@10 -- # set +x 00:38:11.437 ************************************ 00:38:11.437 START TEST ftl_restore_fast 00:38:11.437 ************************************ 00:38:11.437 12:40:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:38:11.437 * Looking for test storage... 00:38:11.437 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:38:11.437 12:40:20 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:38:11.437 12:40:20 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:38:11.437 12:40:20 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:38:11.437 12:40:20 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:38:11.437 12:40:20 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:38:11.437 12:40:20 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:38:11.437 12:40:20 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:11.437 12:40:20 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:38:11.437 12:40:20 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:38:11.437 12:40:20 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:38:11.437 12:40:20 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:38:11.437 12:40:20 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:38:11.437 12:40:20 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:38:11.437 12:40:20 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:38:11.437 12:40:20 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:38:11.437 12:40:20 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:38:11.437 12:40:20 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:38:11.437 12:40:20 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:38:11.437 12:40:20 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:38:11.437 12:40:20 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:38:11.437 12:40:20 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:38:11.437 12:40:20 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:38:11.437 12:40:20 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:38:11.437 12:40:20 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:38:11.438 12:40:20 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:38:11.438 12:40:20 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:38:11.438 12:40:20 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:38:11.438 12:40:20 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:11.438 12:40:20 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:11.438 12:40:20 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:11.438 12:40:20 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:38:11.438 12:40:20 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.vKVK0JABXu 00:38:11.438 12:40:20 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:38:11.438 12:40:20 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:38:11.438 12:40:20 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:38:11.438 12:40:20 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:38:11.438 12:40:20 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:38:11.438 12:40:20 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:38:11.438 12:40:20 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:38:11.438 12:40:20 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:38:11.438 12:40:20 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:38:11.438 12:40:20 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:38:11.438 12:40:20 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:38:11.438 12:40:20 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=87109 00:38:11.438 12:40:20 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 87109 00:38:11.438 12:40:20 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:38:11.438 12:40:20 ftl.ftl_restore_fast -- common/autotest_common.sh@829 -- # '[' -z 87109 ']' 00:38:11.438 12:40:20 ftl.ftl_restore_fast -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:11.438 12:40:20 ftl.ftl_restore_fast -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:11.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:11.438 12:40:20 ftl.ftl_restore_fast -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:11.438 12:40:20 ftl.ftl_restore_fast -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:11.438 12:40:20 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:38:11.438 [2024-07-10 12:40:20.841759] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 00:38:11.438 [2024-07-10 12:40:20.842632] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87109 ] 00:38:11.696 [2024-07-10 12:40:21.018305] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:11.954 [2024-07-10 12:40:21.275848] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:12.887 12:40:22 ftl.ftl_restore_fast -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:12.887 12:40:22 ftl.ftl_restore_fast -- common/autotest_common.sh@862 -- # return 0 00:38:12.887 12:40:22 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:38:12.887 12:40:22 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:38:12.887 12:40:22 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:38:12.887 12:40:22 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:38:12.887 12:40:22 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:38:12.887 12:40:22 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:38:13.145 12:40:22 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:38:13.145 12:40:22 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:38:13.145 12:40:22 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:38:13.145 12:40:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:38:13.145 12:40:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:38:13.145 12:40:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:38:13.145 12:40:22 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1381 -- # local nb 00:38:13.145 12:40:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:38:13.402 12:40:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:38:13.402 { 00:38:13.402 "name": "nvme0n1", 00:38:13.402 "aliases": [ 00:38:13.402 "15ef44ba-3de6-4291-a2f9-cc5270816efb" 00:38:13.403 ], 00:38:13.403 "product_name": "NVMe disk", 00:38:13.403 "block_size": 4096, 00:38:13.403 "num_blocks": 1310720, 00:38:13.403 "uuid": "15ef44ba-3de6-4291-a2f9-cc5270816efb", 00:38:13.403 "assigned_rate_limits": { 00:38:13.403 "rw_ios_per_sec": 0, 00:38:13.403 "rw_mbytes_per_sec": 0, 00:38:13.403 "r_mbytes_per_sec": 0, 00:38:13.403 "w_mbytes_per_sec": 0 00:38:13.403 }, 00:38:13.403 "claimed": true, 00:38:13.403 "claim_type": "read_many_write_one", 00:38:13.403 "zoned": false, 00:38:13.403 "supported_io_types": { 00:38:13.403 "read": true, 00:38:13.403 "write": true, 00:38:13.403 "unmap": true, 00:38:13.403 "flush": true, 00:38:13.403 "reset": true, 00:38:13.403 "nvme_admin": true, 00:38:13.403 "nvme_io": true, 00:38:13.403 "nvme_io_md": false, 00:38:13.403 "write_zeroes": true, 00:38:13.403 "zcopy": false, 00:38:13.403 "get_zone_info": false, 00:38:13.403 "zone_management": false, 00:38:13.403 "zone_append": false, 00:38:13.403 "compare": true, 00:38:13.403 "compare_and_write": false, 00:38:13.403 "abort": true, 00:38:13.403 "seek_hole": false, 00:38:13.403 "seek_data": false, 00:38:13.403 "copy": true, 00:38:13.403 "nvme_iov_md": false 00:38:13.403 }, 00:38:13.403 "driver_specific": { 00:38:13.403 "nvme": [ 00:38:13.403 { 00:38:13.403 "pci_address": "0000:00:11.0", 00:38:13.403 "trid": { 00:38:13.403 "trtype": "PCIe", 00:38:13.403 "traddr": "0000:00:11.0" 00:38:13.403 }, 00:38:13.403 "ctrlr_data": { 00:38:13.403 "cntlid": 0, 00:38:13.403 "vendor_id": "0x1b36", 00:38:13.403 "model_number": "QEMU NVMe Ctrl", 00:38:13.403 "serial_number": "12341", 00:38:13.403 "firmware_revision": "8.0.0", 00:38:13.403 "subnqn": "nqn.2019-08.org.qemu:12341", 00:38:13.403 "oacs": { 00:38:13.403 "security": 0, 00:38:13.403 "format": 1, 00:38:13.403 "firmware": 0, 00:38:13.403 "ns_manage": 1 00:38:13.403 }, 00:38:13.403 "multi_ctrlr": false, 00:38:13.403 "ana_reporting": false 00:38:13.403 }, 00:38:13.403 "vs": { 00:38:13.403 "nvme_version": "1.4" 00:38:13.403 }, 00:38:13.403 "ns_data": { 00:38:13.403 "id": 1, 00:38:13.403 "can_share": false 00:38:13.403 } 00:38:13.403 } 00:38:13.403 ], 00:38:13.403 "mp_policy": "active_passive" 00:38:13.403 } 00:38:13.403 } 00:38:13.403 ]' 00:38:13.403 12:40:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:38:13.403 12:40:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:38:13.403 12:40:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:38:13.661 12:40:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=1310720 00:38:13.661 12:40:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:38:13.661 12:40:22 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 5120 00:38:13.661 12:40:22 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:38:13.661 12:40:22 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:38:13.661 12:40:22 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:38:13.661 12:40:22 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:38:13.661 12:40:22 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:38:13.918 12:40:23 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=88d3ac39-d473-4272-9652-09bb434bf184 00:38:13.918 12:40:23 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:38:13.918 12:40:23 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 88d3ac39-d473-4272-9652-09bb434bf184 00:38:13.918 12:40:23 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:38:14.176 12:40:23 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=ce8528b3-773e-42f2-8d49-ed35d174ad86 00:38:14.176 12:40:23 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u ce8528b3-773e-42f2-8d49-ed35d174ad86 00:38:14.434 12:40:23 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=7f6ae11c-5ed8-4579-a861-d3196ac9cdbe 00:38:14.434 12:40:23 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:38:14.434 12:40:23 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 7f6ae11c-5ed8-4579-a861-d3196ac9cdbe 00:38:14.434 12:40:23 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:38:14.434 12:40:23 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:38:14.434 12:40:23 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=7f6ae11c-5ed8-4579-a861-d3196ac9cdbe 00:38:14.434 12:40:23 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:38:14.434 12:40:23 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size 7f6ae11c-5ed8-4579-a861-d3196ac9cdbe 00:38:14.434 12:40:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=7f6ae11c-5ed8-4579-a861-d3196ac9cdbe 00:38:14.434 12:40:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:38:14.434 12:40:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:38:14.434 12:40:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:38:14.434 12:40:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7f6ae11c-5ed8-4579-a861-d3196ac9cdbe 00:38:14.691 12:40:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:38:14.691 { 00:38:14.691 "name": "7f6ae11c-5ed8-4579-a861-d3196ac9cdbe", 00:38:14.691 "aliases": [ 00:38:14.691 "lvs/nvme0n1p0" 00:38:14.691 ], 00:38:14.691 "product_name": "Logical Volume", 00:38:14.691 "block_size": 4096, 00:38:14.691 "num_blocks": 26476544, 00:38:14.691 "uuid": "7f6ae11c-5ed8-4579-a861-d3196ac9cdbe", 00:38:14.691 "assigned_rate_limits": { 00:38:14.691 "rw_ios_per_sec": 0, 00:38:14.691 "rw_mbytes_per_sec": 0, 00:38:14.691 "r_mbytes_per_sec": 0, 00:38:14.691 "w_mbytes_per_sec": 0 00:38:14.691 }, 00:38:14.691 "claimed": false, 00:38:14.691 "zoned": false, 00:38:14.691 "supported_io_types": { 00:38:14.691 "read": true, 00:38:14.691 "write": true, 00:38:14.691 "unmap": true, 00:38:14.691 "flush": false, 00:38:14.691 "reset": true, 00:38:14.691 "nvme_admin": false, 00:38:14.691 "nvme_io": false, 00:38:14.691 "nvme_io_md": false, 00:38:14.691 "write_zeroes": true, 00:38:14.691 "zcopy": false, 00:38:14.691 "get_zone_info": false, 00:38:14.691 "zone_management": false, 00:38:14.691 
"zone_append": false, 00:38:14.691 "compare": false, 00:38:14.691 "compare_and_write": false, 00:38:14.691 "abort": false, 00:38:14.691 "seek_hole": true, 00:38:14.691 "seek_data": true, 00:38:14.691 "copy": false, 00:38:14.691 "nvme_iov_md": false 00:38:14.691 }, 00:38:14.691 "driver_specific": { 00:38:14.691 "lvol": { 00:38:14.692 "lvol_store_uuid": "ce8528b3-773e-42f2-8d49-ed35d174ad86", 00:38:14.692 "base_bdev": "nvme0n1", 00:38:14.692 "thin_provision": true, 00:38:14.692 "num_allocated_clusters": 0, 00:38:14.692 "snapshot": false, 00:38:14.692 "clone": false, 00:38:14.692 "esnap_clone": false 00:38:14.692 } 00:38:14.692 } 00:38:14.692 } 00:38:14.692 ]' 00:38:14.692 12:40:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:38:14.692 12:40:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:38:14.692 12:40:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:38:14.692 12:40:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:38:14.692 12:40:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:38:14.692 12:40:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:38:14.692 12:40:24 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:38:14.692 12:40:24 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:38:14.692 12:40:24 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:38:14.950 12:40:24 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:38:14.950 12:40:24 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:38:14.950 12:40:24 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size 7f6ae11c-5ed8-4579-a861-d3196ac9cdbe 00:38:14.950 12:40:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=7f6ae11c-5ed8-4579-a861-d3196ac9cdbe 00:38:14.950 12:40:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:38:14.950 12:40:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:38:14.950 12:40:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:38:14.950 12:40:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7f6ae11c-5ed8-4579-a861-d3196ac9cdbe 00:38:15.210 12:40:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:38:15.210 { 00:38:15.210 "name": "7f6ae11c-5ed8-4579-a861-d3196ac9cdbe", 00:38:15.210 "aliases": [ 00:38:15.210 "lvs/nvme0n1p0" 00:38:15.210 ], 00:38:15.210 "product_name": "Logical Volume", 00:38:15.210 "block_size": 4096, 00:38:15.210 "num_blocks": 26476544, 00:38:15.210 "uuid": "7f6ae11c-5ed8-4579-a861-d3196ac9cdbe", 00:38:15.210 "assigned_rate_limits": { 00:38:15.210 "rw_ios_per_sec": 0, 00:38:15.210 "rw_mbytes_per_sec": 0, 00:38:15.210 "r_mbytes_per_sec": 0, 00:38:15.210 "w_mbytes_per_sec": 0 00:38:15.210 }, 00:38:15.210 "claimed": false, 00:38:15.210 "zoned": false, 00:38:15.210 "supported_io_types": { 00:38:15.210 "read": true, 00:38:15.210 "write": true, 00:38:15.210 "unmap": true, 00:38:15.210 "flush": false, 00:38:15.210 "reset": true, 00:38:15.210 "nvme_admin": false, 00:38:15.210 "nvme_io": false, 00:38:15.210 "nvme_io_md": false, 00:38:15.210 "write_zeroes": true, 00:38:15.210 "zcopy": false, 00:38:15.210 "get_zone_info": false, 00:38:15.210 
"zone_management": false, 00:38:15.210 "zone_append": false, 00:38:15.210 "compare": false, 00:38:15.210 "compare_and_write": false, 00:38:15.210 "abort": false, 00:38:15.210 "seek_hole": true, 00:38:15.210 "seek_data": true, 00:38:15.210 "copy": false, 00:38:15.210 "nvme_iov_md": false 00:38:15.210 }, 00:38:15.210 "driver_specific": { 00:38:15.210 "lvol": { 00:38:15.210 "lvol_store_uuid": "ce8528b3-773e-42f2-8d49-ed35d174ad86", 00:38:15.210 "base_bdev": "nvme0n1", 00:38:15.210 "thin_provision": true, 00:38:15.210 "num_allocated_clusters": 0, 00:38:15.210 "snapshot": false, 00:38:15.210 "clone": false, 00:38:15.210 "esnap_clone": false 00:38:15.210 } 00:38:15.210 } 00:38:15.210 } 00:38:15.210 ]' 00:38:15.210 12:40:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:38:15.210 12:40:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:38:15.210 12:40:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:38:15.210 12:40:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:38:15.210 12:40:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:38:15.210 12:40:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:38:15.210 12:40:24 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:38:15.210 12:40:24 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:38:15.468 12:40:24 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:38:15.468 12:40:24 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size 7f6ae11c-5ed8-4579-a861-d3196ac9cdbe 00:38:15.468 12:40:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=7f6ae11c-5ed8-4579-a861-d3196ac9cdbe 00:38:15.468 12:40:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:38:15.468 12:40:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:38:15.468 12:40:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:38:15.468 12:40:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7f6ae11c-5ed8-4579-a861-d3196ac9cdbe 00:38:15.728 12:40:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:38:15.728 { 00:38:15.728 "name": "7f6ae11c-5ed8-4579-a861-d3196ac9cdbe", 00:38:15.728 "aliases": [ 00:38:15.728 "lvs/nvme0n1p0" 00:38:15.728 ], 00:38:15.728 "product_name": "Logical Volume", 00:38:15.728 "block_size": 4096, 00:38:15.728 "num_blocks": 26476544, 00:38:15.728 "uuid": "7f6ae11c-5ed8-4579-a861-d3196ac9cdbe", 00:38:15.728 "assigned_rate_limits": { 00:38:15.728 "rw_ios_per_sec": 0, 00:38:15.728 "rw_mbytes_per_sec": 0, 00:38:15.728 "r_mbytes_per_sec": 0, 00:38:15.728 "w_mbytes_per_sec": 0 00:38:15.728 }, 00:38:15.728 "claimed": false, 00:38:15.728 "zoned": false, 00:38:15.728 "supported_io_types": { 00:38:15.728 "read": true, 00:38:15.728 "write": true, 00:38:15.728 "unmap": true, 00:38:15.728 "flush": false, 00:38:15.728 "reset": true, 00:38:15.728 "nvme_admin": false, 00:38:15.728 "nvme_io": false, 00:38:15.728 "nvme_io_md": false, 00:38:15.728 "write_zeroes": true, 00:38:15.728 "zcopy": false, 00:38:15.728 "get_zone_info": false, 00:38:15.728 "zone_management": false, 00:38:15.728 "zone_append": false, 00:38:15.728 "compare": false, 00:38:15.728 "compare_and_write": false, 00:38:15.728 "abort": false, 
00:38:15.728 "seek_hole": true, 00:38:15.728 "seek_data": true, 00:38:15.728 "copy": false, 00:38:15.728 "nvme_iov_md": false 00:38:15.728 }, 00:38:15.728 "driver_specific": { 00:38:15.728 "lvol": { 00:38:15.728 "lvol_store_uuid": "ce8528b3-773e-42f2-8d49-ed35d174ad86", 00:38:15.728 "base_bdev": "nvme0n1", 00:38:15.728 "thin_provision": true, 00:38:15.728 "num_allocated_clusters": 0, 00:38:15.728 "snapshot": false, 00:38:15.728 "clone": false, 00:38:15.728 "esnap_clone": false 00:38:15.728 } 00:38:15.728 } 00:38:15.728 } 00:38:15.728 ]' 00:38:15.728 12:40:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:38:15.728 12:40:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:38:15.728 12:40:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:38:15.728 12:40:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:38:15.728 12:40:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:38:15.728 12:40:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:38:15.728 12:40:25 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:38:15.728 12:40:25 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 7f6ae11c-5ed8-4579-a861-d3196ac9cdbe --l2p_dram_limit 10' 00:38:15.728 12:40:25 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:38:15.728 12:40:25 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:38:15.728 12:40:25 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:38:15.728 12:40:25 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:38:15.728 12:40:25 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:38:15.728 12:40:25 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 7f6ae11c-5ed8-4579-a861-d3196ac9cdbe --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:38:15.987 [2024-07-10 12:40:25.394439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:15.988 [2024-07-10 12:40:25.394507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:38:15.988 [2024-07-10 12:40:25.394527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:38:15.988 [2024-07-10 12:40:25.394544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:15.988 [2024-07-10 12:40:25.394623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:15.988 [2024-07-10 12:40:25.394641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:38:15.988 [2024-07-10 12:40:25.394655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:38:15.988 [2024-07-10 12:40:25.394670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:15.988 [2024-07-10 12:40:25.394696] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:38:15.988 [2024-07-10 12:40:25.395799] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:38:15.988 [2024-07-10 12:40:25.395834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:15.988 [2024-07-10 12:40:25.395855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:38:15.988 [2024-07-10 12:40:25.395870] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.146 ms 00:38:15.988 [2024-07-10 12:40:25.395884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:15.988 [2024-07-10 12:40:25.396297] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID c69e99e1-d5e8-4abd-adc9-da7bdc7f3597 00:38:15.988 [2024-07-10 12:40:25.398829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:15.988 [2024-07-10 12:40:25.398927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:38:15.988 [2024-07-10 12:40:25.398979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:38:15.988 [2024-07-10 12:40:25.399013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:15.988 [2024-07-10 12:40:25.411848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:15.988 [2024-07-10 12:40:25.411920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:38:15.988 [2024-07-10 12:40:25.411960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.661 ms 00:38:15.988 [2024-07-10 12:40:25.411982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:15.988 [2024-07-10 12:40:25.412207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:15.988 [2024-07-10 12:40:25.412236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:38:15.988 [2024-07-10 12:40:25.412263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.164 ms 00:38:15.988 [2024-07-10 12:40:25.412285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:15.988 [2024-07-10 12:40:25.412422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:15.988 [2024-07-10 12:40:25.412447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:38:15.988 [2024-07-10 12:40:25.412473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:38:15.988 [2024-07-10 12:40:25.412499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:15.988 [2024-07-10 12:40:25.412552] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:38:15.988 [2024-07-10 12:40:25.423022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:15.988 [2024-07-10 12:40:25.423070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:38:15.988 [2024-07-10 12:40:25.423088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.505 ms 00:38:15.988 [2024-07-10 12:40:25.423107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:15.988 [2024-07-10 12:40:25.423157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:15.988 [2024-07-10 12:40:25.423175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:38:15.988 [2024-07-10 12:40:25.423189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:38:15.988 [2024-07-10 12:40:25.423206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:15.988 [2024-07-10 12:40:25.423275] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:38:15.988 [2024-07-10 12:40:25.423458] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:38:15.988 [2024-07-10 12:40:25.423477] 
upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:38:15.988 [2024-07-10 12:40:25.423505] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:38:15.988 [2024-07-10 12:40:25.423523] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:38:15.988 [2024-07-10 12:40:25.423543] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:38:15.988 [2024-07-10 12:40:25.423558] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:38:15.988 [2024-07-10 12:40:25.423575] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:38:15.988 [2024-07-10 12:40:25.423592] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:38:15.988 [2024-07-10 12:40:25.423611] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:38:15.988 [2024-07-10 12:40:25.423625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:15.988 [2024-07-10 12:40:25.423642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:38:15.988 [2024-07-10 12:40:25.423656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.353 ms 00:38:15.988 [2024-07-10 12:40:25.423673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:15.988 [2024-07-10 12:40:25.423791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:15.988 [2024-07-10 12:40:25.423811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:38:15.988 [2024-07-10 12:40:25.423825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:38:15.988 [2024-07-10 12:40:25.423856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:15.988 [2024-07-10 12:40:25.423981] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:38:15.988 [2024-07-10 12:40:25.424005] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:38:15.988 [2024-07-10 12:40:25.424033] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:38:15.988 [2024-07-10 12:40:25.424051] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:15.988 [2024-07-10 12:40:25.424066] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:38:15.988 [2024-07-10 12:40:25.424081] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:38:15.988 [2024-07-10 12:40:25.424094] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:38:15.988 [2024-07-10 12:40:25.424110] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:38:15.988 [2024-07-10 12:40:25.424123] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:38:15.988 [2024-07-10 12:40:25.424148] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:38:15.988 [2024-07-10 12:40:25.424160] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:38:15.988 [2024-07-10 12:40:25.424176] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:38:15.988 [2024-07-10 12:40:25.424189] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:38:15.988 [2024-07-10 12:40:25.424206] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:38:15.988 [2024-07-10 12:40:25.424219] ftl_layout.c: 
119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:38:15.988 [2024-07-10 12:40:25.424235] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:15.988 [2024-07-10 12:40:25.424248] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:38:15.988 [2024-07-10 12:40:25.424268] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:38:15.988 [2024-07-10 12:40:25.424280] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:15.988 [2024-07-10 12:40:25.424296] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:38:15.988 [2024-07-10 12:40:25.424309] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:38:15.988 [2024-07-10 12:40:25.424325] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:15.988 [2024-07-10 12:40:25.424338] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:38:15.988 [2024-07-10 12:40:25.424353] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:38:15.988 [2024-07-10 12:40:25.424366] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:15.988 [2024-07-10 12:40:25.424381] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:38:15.988 [2024-07-10 12:40:25.424393] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:38:15.988 [2024-07-10 12:40:25.424408] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:15.988 [2024-07-10 12:40:25.424421] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:38:15.988 [2024-07-10 12:40:25.424437] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:38:15.988 [2024-07-10 12:40:25.424449] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:15.988 [2024-07-10 12:40:25.424465] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:38:15.988 [2024-07-10 12:40:25.424477] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:38:15.988 [2024-07-10 12:40:25.424495] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:38:15.988 [2024-07-10 12:40:25.424508] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:38:15.988 [2024-07-10 12:40:25.424524] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:38:15.988 [2024-07-10 12:40:25.424537] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:38:15.988 [2024-07-10 12:40:25.424552] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:38:15.988 [2024-07-10 12:40:25.424565] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:38:15.988 [2024-07-10 12:40:25.424582] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:15.988 [2024-07-10 12:40:25.424595] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:38:15.988 [2024-07-10 12:40:25.424610] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:38:15.988 [2024-07-10 12:40:25.424623] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:15.988 [2024-07-10 12:40:25.424639] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:38:15.988 [2024-07-10 12:40:25.424653] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:38:15.988 [2024-07-10 12:40:25.424669] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 
00:38:15.988 [2024-07-10 12:40:25.424683] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:15.988 [2024-07-10 12:40:25.424700] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:38:15.988 [2024-07-10 12:40:25.424714] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:38:15.988 [2024-07-10 12:40:25.424745] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:38:15.988 [2024-07-10 12:40:25.424759] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:38:15.988 [2024-07-10 12:40:25.424775] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:38:15.988 [2024-07-10 12:40:25.424788] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:38:15.988 [2024-07-10 12:40:25.424809] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:38:15.988 [2024-07-10 12:40:25.424826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:38:15.989 [2024-07-10 12:40:25.424849] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:38:15.989 [2024-07-10 12:40:25.424863] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:38:15.989 [2024-07-10 12:40:25.424881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:38:15.989 [2024-07-10 12:40:25.424895] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:38:15.989 [2024-07-10 12:40:25.424913] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:38:15.989 [2024-07-10 12:40:25.424927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:38:15.989 [2024-07-10 12:40:25.424944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:38:15.989 [2024-07-10 12:40:25.424958] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:38:15.989 [2024-07-10 12:40:25.424977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:38:15.989 [2024-07-10 12:40:25.424991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:38:15.989 [2024-07-10 12:40:25.425011] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:38:15.989 [2024-07-10 12:40:25.425025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:38:15.989 [2024-07-10 12:40:25.425042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:38:15.989 [2024-07-10 12:40:25.425056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 
00:38:15.989 [2024-07-10 12:40:25.425074] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:38:15.989 [2024-07-10 12:40:25.425088] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:38:15.989 [2024-07-10 12:40:25.425107] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:38:15.989 [2024-07-10 12:40:25.425121] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:38:15.989 [2024-07-10 12:40:25.425138] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:38:15.989 [2024-07-10 12:40:25.425152] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:38:15.989 [2024-07-10 12:40:25.425171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:15.989 [2024-07-10 12:40:25.425185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:38:15.989 [2024-07-10 12:40:25.425203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.260 ms 00:38:15.989 [2024-07-10 12:40:25.425216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:15.989 [2024-07-10 12:40:25.425278] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:38:15.989 [2024-07-10 12:40:25.425297] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:38:20.181 [2024-07-10 12:40:28.860724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:20.181 [2024-07-10 12:40:28.860807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:38:20.181 [2024-07-10 12:40:28.860830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3441.010 ms 00:38:20.181 [2024-07-10 12:40:28.860841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.181 [2024-07-10 12:40:28.911134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:20.181 [2024-07-10 12:40:28.911199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:38:20.181 [2024-07-10 12:40:28.911221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.017 ms 00:38:20.181 [2024-07-10 12:40:28.911233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.181 [2024-07-10 12:40:28.911418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:20.181 [2024-07-10 12:40:28.911433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:38:20.181 [2024-07-10 12:40:28.911448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:38:20.181 [2024-07-10 12:40:28.911464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.181 [2024-07-10 12:40:28.965368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:20.181 [2024-07-10 12:40:28.965431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:38:20.181 [2024-07-10 12:40:28.965451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.937 ms 00:38:20.181 [2024-07-10 12:40:28.965463] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.181 [2024-07-10 12:40:28.965530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:20.181 [2024-07-10 12:40:28.965557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:38:20.181 [2024-07-10 12:40:28.965594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:38:20.181 [2024-07-10 12:40:28.965609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.181 [2024-07-10 12:40:28.966723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:20.181 [2024-07-10 12:40:28.966873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:38:20.181 [2024-07-10 12:40:28.966975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.009 ms 00:38:20.181 [2024-07-10 12:40:28.966996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.181 [2024-07-10 12:40:28.967192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:20.181 [2024-07-10 12:40:28.967210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:38:20.181 [2024-07-10 12:40:28.967233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:38:20.181 [2024-07-10 12:40:28.967249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.181 [2024-07-10 12:40:28.992541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:20.181 [2024-07-10 12:40:28.992608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:38:20.181 [2024-07-10 12:40:28.992633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.297 ms 00:38:20.181 [2024-07-10 12:40:28.992650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.181 [2024-07-10 12:40:29.008423] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:38:20.181 [2024-07-10 12:40:29.012464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:20.181 [2024-07-10 12:40:29.012505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:38:20.181 [2024-07-10 12:40:29.012523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.687 ms 00:38:20.181 [2024-07-10 12:40:29.012537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.181 [2024-07-10 12:40:29.120389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:20.181 [2024-07-10 12:40:29.120469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:38:20.181 [2024-07-10 12:40:29.120489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 107.974 ms 00:38:20.181 [2024-07-10 12:40:29.120504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.181 [2024-07-10 12:40:29.120731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:20.181 [2024-07-10 12:40:29.120768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:38:20.181 [2024-07-10 12:40:29.120798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.170 ms 00:38:20.181 [2024-07-10 12:40:29.120816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.181 [2024-07-10 12:40:29.161771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:20.181 [2024-07-10 12:40:29.161843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save 
initial band info metadata 00:38:20.181 [2024-07-10 12:40:29.161861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.945 ms 00:38:20.181 [2024-07-10 12:40:29.161875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.181 [2024-07-10 12:40:29.202639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:20.181 [2024-07-10 12:40:29.202708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:38:20.181 [2024-07-10 12:40:29.202740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.771 ms 00:38:20.181 [2024-07-10 12:40:29.202755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.181 [2024-07-10 12:40:29.203510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:20.181 [2024-07-10 12:40:29.203541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:38:20.181 [2024-07-10 12:40:29.203554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.704 ms 00:38:20.181 [2024-07-10 12:40:29.203571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.181 [2024-07-10 12:40:29.319269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:20.181 [2024-07-10 12:40:29.319370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:38:20.181 [2024-07-10 12:40:29.319392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 115.817 ms 00:38:20.181 [2024-07-10 12:40:29.319410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.181 [2024-07-10 12:40:29.360095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:20.181 [2024-07-10 12:40:29.360174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:38:20.181 [2024-07-10 12:40:29.360193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.690 ms 00:38:20.181 [2024-07-10 12:40:29.360208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.181 [2024-07-10 12:40:29.404126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:20.181 [2024-07-10 12:40:29.404214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:38:20.182 [2024-07-10 12:40:29.404232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.931 ms 00:38:20.182 [2024-07-10 12:40:29.404245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.182 [2024-07-10 12:40:29.445138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:20.182 [2024-07-10 12:40:29.445211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:38:20.182 [2024-07-10 12:40:29.445230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.902 ms 00:38:20.182 [2024-07-10 12:40:29.445244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.182 [2024-07-10 12:40:29.445325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:20.182 [2024-07-10 12:40:29.445342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:38:20.182 [2024-07-10 12:40:29.445355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:38:20.182 [2024-07-10 12:40:29.445373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.182 [2024-07-10 12:40:29.445485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:20.182 [2024-07-10 
12:40:29.445502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:38:20.182 [2024-07-10 12:40:29.445517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:38:20.182 [2024-07-10 12:40:29.445530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.182 [2024-07-10 12:40:29.446672] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4058.353 ms, result 0 00:38:20.182 { 00:38:20.182 "name": "ftl0", 00:38:20.182 "uuid": "c69e99e1-d5e8-4abd-adc9-da7bdc7f3597" 00:38:20.182 } 00:38:20.182 12:40:29 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:38:20.182 12:40:29 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:38:20.441 12:40:29 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:38:20.441 12:40:29 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:38:20.441 [2024-07-10 12:40:29.889250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:20.441 [2024-07-10 12:40:29.889322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:38:20.441 [2024-07-10 12:40:29.889343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:38:20.441 [2024-07-10 12:40:29.889354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.441 [2024-07-10 12:40:29.889386] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:38:20.441 [2024-07-10 12:40:29.893811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:20.441 [2024-07-10 12:40:29.893860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:38:20.441 [2024-07-10 12:40:29.893874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.412 ms 00:38:20.441 [2024-07-10 12:40:29.893889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.441 [2024-07-10 12:40:29.894168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:20.441 [2024-07-10 12:40:29.894194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:38:20.441 [2024-07-10 12:40:29.894223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.230 ms 00:38:20.441 [2024-07-10 12:40:29.894253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.441 [2024-07-10 12:40:29.897200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:20.441 [2024-07-10 12:40:29.897369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:38:20.441 [2024-07-10 12:40:29.897476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.927 ms 00:38:20.441 [2024-07-10 12:40:29.897530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.441 [2024-07-10 12:40:29.903212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:20.441 [2024-07-10 12:40:29.903370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:38:20.441 [2024-07-10 12:40:29.903463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.623 ms 00:38:20.441 [2024-07-10 12:40:29.903504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.701 [2024-07-10 12:40:29.944743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:38:20.701 [2024-07-10 12:40:29.945071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:38:20.701 [2024-07-10 12:40:29.945237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.196 ms 00:38:20.701 [2024-07-10 12:40:29.945292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.701 [2024-07-10 12:40:29.969652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:20.701 [2024-07-10 12:40:29.969854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:38:20.701 [2024-07-10 12:40:29.969882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.303 ms 00:38:20.701 [2024-07-10 12:40:29.969897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.701 [2024-07-10 12:40:29.970066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:20.701 [2024-07-10 12:40:29.970083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:38:20.701 [2024-07-10 12:40:29.970095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:38:20.701 [2024-07-10 12:40:29.970108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.701 [2024-07-10 12:40:30.009112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:20.702 [2024-07-10 12:40:30.009319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:38:20.702 [2024-07-10 12:40:30.009457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.044 ms 00:38:20.702 [2024-07-10 12:40:30.009499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.702 [2024-07-10 12:40:30.049705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:20.702 [2024-07-10 12:40:30.050019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:38:20.702 [2024-07-10 12:40:30.050116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.200 ms 00:38:20.702 [2024-07-10 12:40:30.050160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.702 [2024-07-10 12:40:30.089534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:20.702 [2024-07-10 12:40:30.089808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:38:20.702 [2024-07-10 12:40:30.089839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.355 ms 00:38:20.702 [2024-07-10 12:40:30.089852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.702 [2024-07-10 12:40:30.126571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:20.702 [2024-07-10 12:40:30.126628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:38:20.702 [2024-07-10 12:40:30.126645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.644 ms 00:38:20.702 [2024-07-10 12:40:30.126658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.702 [2024-07-10 12:40:30.126705] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:38:20.702 [2024-07-10 12:40:30.126726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.126763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.126777] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.126789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.126802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.126814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.126828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.126858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.126875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.126887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.126901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.126912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.126926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.126937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.126950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.126961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.126974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.126985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.126998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.127009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.127024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.127036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.127049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.127059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.127075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.127086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.127099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.127110] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.127123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.127138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.127389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.127400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.127414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.127425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.127438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.127449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.127463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.127473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.127486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.127497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.127513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.127524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.127538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.127548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.127562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.127573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.127588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.127599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.127612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.127622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.127636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.127647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 
[2024-07-10 12:40:30.127660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.127671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.127684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.127695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.127710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.127721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.127742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.127753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.127766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.127781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.127795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.127806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.127819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.127843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.127858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.127869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.127883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.127894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.127907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.127918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.127937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.127948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.127961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.127972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.127986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 
state: free 00:38:20.702 [2024-07-10 12:40:30.127997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.128010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.128021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.128034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:38:20.702 [2024-07-10 12:40:30.128045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:38:20.703 [2024-07-10 12:40:30.128058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:38:20.703 [2024-07-10 12:40:30.128068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:38:20.703 [2024-07-10 12:40:30.128081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:38:20.703 [2024-07-10 12:40:30.128091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:38:20.703 [2024-07-10 12:40:30.128104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:38:20.703 [2024-07-10 12:40:30.128115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:38:20.703 [2024-07-10 12:40:30.128131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:38:20.703 [2024-07-10 12:40:30.128150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:38:20.703 [2024-07-10 12:40:30.128163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:38:20.703 [2024-07-10 12:40:30.128174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:38:20.703 [2024-07-10 12:40:30.128188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:38:20.703 [2024-07-10 12:40:30.128201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:38:20.703 [2024-07-10 12:40:30.128216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:38:20.703 [2024-07-10 12:40:30.128227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:38:20.703 [2024-07-10 12:40:30.128240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:38:20.703 [2024-07-10 12:40:30.128252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:38:20.703 [2024-07-10 12:40:30.128267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:38:20.703 [2024-07-10 12:40:30.128278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:38:20.703 [2024-07-10 12:40:30.128299] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:38:20.703 [2024-07-10 12:40:30.128313] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c69e99e1-d5e8-4abd-adc9-da7bdc7f3597 
00:38:20.703 [2024-07-10 12:40:30.128326] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:38:20.703 [2024-07-10 12:40:30.128336] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:38:20.703 [2024-07-10 12:40:30.128350] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:38:20.703 [2024-07-10 12:40:30.128361] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:38:20.703 [2024-07-10 12:40:30.128373] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:38:20.703 [2024-07-10 12:40:30.128384] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:38:20.703 [2024-07-10 12:40:30.128397] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:38:20.703 [2024-07-10 12:40:30.128406] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:38:20.703 [2024-07-10 12:40:30.128418] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:38:20.703 [2024-07-10 12:40:30.128428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:20.703 [2024-07-10 12:40:30.128441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:38:20.703 [2024-07-10 12:40:30.128452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.728 ms 00:38:20.703 [2024-07-10 12:40:30.128464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.703 [2024-07-10 12:40:30.147441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:20.703 [2024-07-10 12:40:30.147493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:38:20.703 [2024-07-10 12:40:30.147508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.946 ms 00:38:20.703 [2024-07-10 12:40:30.147522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.703 [2024-07-10 12:40:30.148012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:20.703 [2024-07-10 12:40:30.148052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:38:20.703 [2024-07-10 12:40:30.148073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.465 ms 00:38:20.703 [2024-07-10 12:40:30.148090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.962 [2024-07-10 12:40:30.210586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:20.962 [2024-07-10 12:40:30.210655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:38:20.962 [2024-07-10 12:40:30.210673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:20.962 [2024-07-10 12:40:30.210687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.962 [2024-07-10 12:40:30.210778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:20.962 [2024-07-10 12:40:30.210794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:38:20.962 [2024-07-10 12:40:30.210805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:20.962 [2024-07-10 12:40:30.210821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.962 [2024-07-10 12:40:30.210925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:20.962 [2024-07-10 12:40:30.210944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:38:20.962 [2024-07-10 12:40:30.210955] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:20.962 [2024-07-10 12:40:30.210968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.962 [2024-07-10 12:40:30.210989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:20.962 [2024-07-10 12:40:30.211006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:38:20.962 [2024-07-10 12:40:30.211016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:20.962 [2024-07-10 12:40:30.211028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.962 [2024-07-10 12:40:30.333130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:20.962 [2024-07-10 12:40:30.333206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:38:20.962 [2024-07-10 12:40:30.333223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:20.962 [2024-07-10 12:40:30.333237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.962 [2024-07-10 12:40:30.433511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:20.962 [2024-07-10 12:40:30.433584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:38:20.962 [2024-07-10 12:40:30.433601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:20.962 [2024-07-10 12:40:30.433618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.962 [2024-07-10 12:40:30.433725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:20.962 [2024-07-10 12:40:30.433762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:38:20.962 [2024-07-10 12:40:30.433774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:20.962 [2024-07-10 12:40:30.433788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.962 [2024-07-10 12:40:30.433841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:20.962 [2024-07-10 12:40:30.433859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:38:20.962 [2024-07-10 12:40:30.433869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:20.962 [2024-07-10 12:40:30.433883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.962 [2024-07-10 12:40:30.433998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:20.962 [2024-07-10 12:40:30.434014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:38:20.962 [2024-07-10 12:40:30.434025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:20.962 [2024-07-10 12:40:30.434039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.962 [2024-07-10 12:40:30.434083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:20.962 [2024-07-10 12:40:30.434099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:38:20.962 [2024-07-10 12:40:30.434109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:20.962 [2024-07-10 12:40:30.434121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.962 [2024-07-10 12:40:30.434166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:20.962 [2024-07-10 12:40:30.434179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Open cache bdev 00:38:20.962 [2024-07-10 12:40:30.434190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:20.962 [2024-07-10 12:40:30.434203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.962 [2024-07-10 12:40:30.434248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:20.962 [2024-07-10 12:40:30.434932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:38:20.962 [2024-07-10 12:40:30.434947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:20.962 [2024-07-10 12:40:30.434961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.962 [2024-07-10 12:40:30.435101] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 546.703 ms, result 0 00:38:20.962 true 00:38:21.221 12:40:30 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 87109 00:38:21.221 12:40:30 ftl.ftl_restore_fast -- common/autotest_common.sh@948 -- # '[' -z 87109 ']' 00:38:21.221 12:40:30 ftl.ftl_restore_fast -- common/autotest_common.sh@952 -- # kill -0 87109 00:38:21.221 12:40:30 ftl.ftl_restore_fast -- common/autotest_common.sh@953 -- # uname 00:38:21.221 12:40:30 ftl.ftl_restore_fast -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:21.221 12:40:30 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87109 00:38:21.221 12:40:30 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:38:21.221 12:40:30 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:38:21.221 12:40:30 ftl.ftl_restore_fast -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87109' 00:38:21.221 killing process with pid 87109 00:38:21.221 12:40:30 ftl.ftl_restore_fast -- common/autotest_common.sh@967 -- # kill 87109 00:38:21.221 12:40:30 ftl.ftl_restore_fast -- common/autotest_common.sh@972 -- # wait 87109 00:38:26.486 12:40:35 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:38:30.672 262144+0 records in 00:38:30.672 262144+0 records out 00:38:30.672 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.91838 s, 274 MB/s 00:38:30.672 12:40:39 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:38:32.052 12:40:41 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:38:32.052 [2024-07-10 12:40:41.470380] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:38:32.052 [2024-07-10 12:40:41.470540] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87345 ] 00:38:32.322 [2024-07-10 12:40:41.646609] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:32.580 [2024-07-10 12:40:41.889211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:32.943 [2024-07-10 12:40:42.288194] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:38:32.943 [2024-07-10 12:40:42.288272] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:38:33.204 [2024-07-10 12:40:42.451925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:33.204 [2024-07-10 12:40:42.451996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:38:33.204 [2024-07-10 12:40:42.452013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:38:33.204 [2024-07-10 12:40:42.452024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:33.204 [2024-07-10 12:40:42.452093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:33.204 [2024-07-10 12:40:42.452106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:38:33.204 [2024-07-10 12:40:42.452118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:38:33.204 [2024-07-10 12:40:42.452132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:33.204 [2024-07-10 12:40:42.452164] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:38:33.204 [2024-07-10 12:40:42.453365] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:38:33.204 [2024-07-10 12:40:42.453395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:33.204 [2024-07-10 12:40:42.453411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:38:33.204 [2024-07-10 12:40:42.453422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.238 ms 00:38:33.204 [2024-07-10 12:40:42.453433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:33.204 [2024-07-10 12:40:42.455254] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:38:33.204 [2024-07-10 12:40:42.476206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:33.204 [2024-07-10 12:40:42.476269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:38:33.204 [2024-07-10 12:40:42.476287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.984 ms 00:38:33.204 [2024-07-10 12:40:42.476299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:33.204 [2024-07-10 12:40:42.476394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:33.204 [2024-07-10 12:40:42.476408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:38:33.204 [2024-07-10 12:40:42.476424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:38:33.204 [2024-07-10 12:40:42.476435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:33.204 [2024-07-10 12:40:42.484317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:38:33.204 [2024-07-10 12:40:42.484358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:38:33.204 [2024-07-10 12:40:42.484372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.802 ms 00:38:33.204 [2024-07-10 12:40:42.484384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:33.204 [2024-07-10 12:40:42.484488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:33.204 [2024-07-10 12:40:42.484507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:38:33.204 [2024-07-10 12:40:42.484519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:38:33.204 [2024-07-10 12:40:42.484530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:33.204 [2024-07-10 12:40:42.484584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:33.204 [2024-07-10 12:40:42.484596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:38:33.204 [2024-07-10 12:40:42.484608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:38:33.204 [2024-07-10 12:40:42.484619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:33.204 [2024-07-10 12:40:42.484650] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:38:33.204 [2024-07-10 12:40:42.490214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:33.204 [2024-07-10 12:40:42.490249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:38:33.204 [2024-07-10 12:40:42.490262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.581 ms 00:38:33.204 [2024-07-10 12:40:42.490272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:33.204 [2024-07-10 12:40:42.490314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:33.204 [2024-07-10 12:40:42.490325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:38:33.204 [2024-07-10 12:40:42.490336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:38:33.204 [2024-07-10 12:40:42.490346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:33.204 [2024-07-10 12:40:42.490407] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:38:33.204 [2024-07-10 12:40:42.490433] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:38:33.204 [2024-07-10 12:40:42.490470] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:38:33.204 [2024-07-10 12:40:42.490491] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:38:33.204 [2024-07-10 12:40:42.490577] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:38:33.204 [2024-07-10 12:40:42.490591] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:38:33.204 [2024-07-10 12:40:42.490605] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:38:33.204 [2024-07-10 12:40:42.490618] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:38:33.204 [2024-07-10 12:40:42.490630] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:38:33.204 [2024-07-10 12:40:42.490641] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:38:33.204 [2024-07-10 12:40:42.490651] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:38:33.204 [2024-07-10 12:40:42.490662] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:38:33.204 [2024-07-10 12:40:42.490672] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:38:33.204 [2024-07-10 12:40:42.490683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:33.204 [2024-07-10 12:40:42.490697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:38:33.204 [2024-07-10 12:40:42.490708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:38:33.204 [2024-07-10 12:40:42.490718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:33.204 [2024-07-10 12:40:42.490809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:33.204 [2024-07-10 12:40:42.490821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:38:33.204 [2024-07-10 12:40:42.490831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:38:33.204 [2024-07-10 12:40:42.490841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:33.204 [2024-07-10 12:40:42.490929] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:38:33.204 [2024-07-10 12:40:42.490943] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:38:33.204 [2024-07-10 12:40:42.490958] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:38:33.204 [2024-07-10 12:40:42.490969] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:33.204 [2024-07-10 12:40:42.490980] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:38:33.204 [2024-07-10 12:40:42.490989] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:38:33.204 [2024-07-10 12:40:42.490999] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:38:33.204 [2024-07-10 12:40:42.491008] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:38:33.204 [2024-07-10 12:40:42.491018] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:38:33.204 [2024-07-10 12:40:42.491027] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:38:33.204 [2024-07-10 12:40:42.491039] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:38:33.204 [2024-07-10 12:40:42.491048] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:38:33.204 [2024-07-10 12:40:42.491058] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:38:33.204 [2024-07-10 12:40:42.491067] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:38:33.205 [2024-07-10 12:40:42.491076] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:38:33.205 [2024-07-10 12:40:42.491085] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:33.205 [2024-07-10 12:40:42.491095] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:38:33.205 [2024-07-10 12:40:42.491104] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:38:33.205 [2024-07-10 12:40:42.491114] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:33.205 [2024-07-10 12:40:42.491123] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:38:33.205 [2024-07-10 12:40:42.491145] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:38:33.205 [2024-07-10 12:40:42.491154] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:33.205 [2024-07-10 12:40:42.491163] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:38:33.205 [2024-07-10 12:40:42.491173] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:38:33.205 [2024-07-10 12:40:42.491182] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:33.205 [2024-07-10 12:40:42.491191] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:38:33.205 [2024-07-10 12:40:42.491200] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:38:33.205 [2024-07-10 12:40:42.491209] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:33.205 [2024-07-10 12:40:42.491219] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:38:33.205 [2024-07-10 12:40:42.491228] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:38:33.205 [2024-07-10 12:40:42.491236] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:33.205 [2024-07-10 12:40:42.491245] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:38:33.205 [2024-07-10 12:40:42.491254] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:38:33.205 [2024-07-10 12:40:42.491263] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:38:33.205 [2024-07-10 12:40:42.491272] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:38:33.205 [2024-07-10 12:40:42.491281] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:38:33.205 [2024-07-10 12:40:42.491290] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:38:33.205 [2024-07-10 12:40:42.491299] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:38:33.205 [2024-07-10 12:40:42.491308] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:38:33.205 [2024-07-10 12:40:42.491317] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:33.205 [2024-07-10 12:40:42.491326] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:38:33.205 [2024-07-10 12:40:42.491335] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:38:33.205 [2024-07-10 12:40:42.491345] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:33.205 [2024-07-10 12:40:42.491354] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:38:33.205 [2024-07-10 12:40:42.491364] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:38:33.205 [2024-07-10 12:40:42.491374] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:38:33.205 [2024-07-10 12:40:42.491383] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:33.205 [2024-07-10 12:40:42.491392] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:38:33.205 [2024-07-10 12:40:42.491401] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:38:33.205 [2024-07-10 12:40:42.491410] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:38:33.205 
[2024-07-10 12:40:42.491420] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:38:33.205 [2024-07-10 12:40:42.491428] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:38:33.205 [2024-07-10 12:40:42.491437] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:38:33.205 [2024-07-10 12:40:42.491447] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:38:33.205 [2024-07-10 12:40:42.491460] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:38:33.205 [2024-07-10 12:40:42.491471] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:38:33.205 [2024-07-10 12:40:42.491482] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:38:33.205 [2024-07-10 12:40:42.491492] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:38:33.205 [2024-07-10 12:40:42.491502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:38:33.205 [2024-07-10 12:40:42.491512] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:38:33.205 [2024-07-10 12:40:42.491522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:38:33.205 [2024-07-10 12:40:42.491532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:38:33.205 [2024-07-10 12:40:42.491542] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:38:33.205 [2024-07-10 12:40:42.491551] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:38:33.205 [2024-07-10 12:40:42.491576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:38:33.205 [2024-07-10 12:40:42.491587] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:38:33.205 [2024-07-10 12:40:42.491598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:38:33.205 [2024-07-10 12:40:42.491608] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:38:33.205 [2024-07-10 12:40:42.491618] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:38:33.205 [2024-07-10 12:40:42.491629] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:38:33.205 [2024-07-10 12:40:42.491640] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:38:33.205 [2024-07-10 12:40:42.491652] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:38:33.205 [2024-07-10 12:40:42.491663] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:38:33.205 [2024-07-10 12:40:42.491673] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:38:33.205 [2024-07-10 12:40:42.491684] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:38:33.205 [2024-07-10 12:40:42.491695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:33.205 [2024-07-10 12:40:42.491711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:38:33.205 [2024-07-10 12:40:42.491721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.819 ms 00:38:33.205 [2024-07-10 12:40:42.491741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:33.205 [2024-07-10 12:40:42.546275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:33.205 [2024-07-10 12:40:42.546340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:38:33.205 [2024-07-10 12:40:42.546357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.566 ms 00:38:33.205 [2024-07-10 12:40:42.546368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:33.205 [2024-07-10 12:40:42.546480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:33.205 [2024-07-10 12:40:42.546492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:38:33.205 [2024-07-10 12:40:42.546503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:38:33.205 [2024-07-10 12:40:42.546513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:33.205 [2024-07-10 12:40:42.596987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:33.205 [2024-07-10 12:40:42.597046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:38:33.205 [2024-07-10 12:40:42.597063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.473 ms 00:38:33.205 [2024-07-10 12:40:42.597073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:33.205 [2024-07-10 12:40:42.597140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:33.205 [2024-07-10 12:40:42.597151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:38:33.205 [2024-07-10 12:40:42.597163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:38:33.205 [2024-07-10 12:40:42.597173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:33.205 [2024-07-10 12:40:42.597673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:33.205 [2024-07-10 12:40:42.597688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:38:33.205 [2024-07-10 12:40:42.597700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.427 ms 00:38:33.205 [2024-07-10 12:40:42.597710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:33.205 [2024-07-10 12:40:42.597856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:33.205 [2024-07-10 12:40:42.597870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:38:33.205 [2024-07-10 12:40:42.597881] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:38:33.205 [2024-07-10 12:40:42.597891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:33.205 [2024-07-10 12:40:42.617431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:33.205 [2024-07-10 12:40:42.617483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:38:33.205 [2024-07-10 12:40:42.617499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.546 ms 00:38:33.205 [2024-07-10 12:40:42.617510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:33.205 [2024-07-10 12:40:42.637267] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:38:33.205 [2024-07-10 12:40:42.637319] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:38:33.205 [2024-07-10 12:40:42.637340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:33.205 [2024-07-10 12:40:42.637352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:38:33.205 [2024-07-10 12:40:42.637366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.709 ms 00:38:33.205 [2024-07-10 12:40:42.637375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:33.205 [2024-07-10 12:40:42.666773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:33.205 [2024-07-10 12:40:42.666852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:38:33.205 [2024-07-10 12:40:42.666870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.394 ms 00:38:33.205 [2024-07-10 12:40:42.666882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:33.465 [2024-07-10 12:40:42.686288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:33.465 [2024-07-10 12:40:42.686333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:38:33.465 [2024-07-10 12:40:42.686347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.362 ms 00:38:33.465 [2024-07-10 12:40:42.686357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:33.465 [2024-07-10 12:40:42.704990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:33.465 [2024-07-10 12:40:42.705047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:38:33.465 [2024-07-10 12:40:42.705063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.618 ms 00:38:33.465 [2024-07-10 12:40:42.705074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:33.465 [2024-07-10 12:40:42.705909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:33.465 [2024-07-10 12:40:42.705936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:38:33.465 [2024-07-10 12:40:42.705949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.707 ms 00:38:33.465 [2024-07-10 12:40:42.705959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:33.465 [2024-07-10 12:40:42.794750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:33.465 [2024-07-10 12:40:42.794828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:38:33.465 [2024-07-10 12:40:42.794846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 88.909 ms 00:38:33.465 [2024-07-10 12:40:42.794858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:33.465 [2024-07-10 12:40:42.809441] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:38:33.465 [2024-07-10 12:40:42.813151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:33.465 [2024-07-10 12:40:42.813196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:38:33.465 [2024-07-10 12:40:42.813212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.252 ms 00:38:33.465 [2024-07-10 12:40:42.813224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:33.465 [2024-07-10 12:40:42.813349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:33.465 [2024-07-10 12:40:42.813362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:38:33.465 [2024-07-10 12:40:42.813374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:38:33.465 [2024-07-10 12:40:42.813384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:33.465 [2024-07-10 12:40:42.813463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:33.465 [2024-07-10 12:40:42.813480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:38:33.465 [2024-07-10 12:40:42.813491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:38:33.465 [2024-07-10 12:40:42.813502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:33.465 [2024-07-10 12:40:42.813523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:33.465 [2024-07-10 12:40:42.813535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:38:33.465 [2024-07-10 12:40:42.813545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:38:33.465 [2024-07-10 12:40:42.813556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:33.465 [2024-07-10 12:40:42.813591] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:38:33.465 [2024-07-10 12:40:42.813604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:33.465 [2024-07-10 12:40:42.813614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:38:33.465 [2024-07-10 12:40:42.813629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:38:33.465 [2024-07-10 12:40:42.813639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:33.465 [2024-07-10 12:40:42.850192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:33.465 [2024-07-10 12:40:42.850242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:38:33.465 [2024-07-10 12:40:42.850258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.591 ms 00:38:33.465 [2024-07-10 12:40:42.850269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:33.465 [2024-07-10 12:40:42.850347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:33.466 [2024-07-10 12:40:42.850369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:38:33.466 [2024-07-10 12:40:42.850380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:38:33.466 [2024-07-10 12:40:42.850391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:38:33.466 [2024-07-10 12:40:42.851530] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 399.765 ms, result 0 00:39:08.290  Copying: 28/1024 [MB] (28 MBps) Copying: 57/1024 [MB] (29 MBps) Copying: 86/1024 [MB] (28 MBps) Copying: 113/1024 [MB] (27 MBps) Copying: 145/1024 [MB] (31 MBps) Copying: 177/1024 [MB] (31 MBps) Copying: 207/1024 [MB] (30 MBps) Copying: 238/1024 [MB] (30 MBps) Copying: 268/1024 [MB] (29 MBps) Copying: 292/1024 [MB] (24 MBps) Copying: 320/1024 [MB] (27 MBps) Copying: 347/1024 [MB] (27 MBps) Copying: 374/1024 [MB] (26 MBps) Copying: 400/1024 [MB] (26 MBps) Copying: 429/1024 [MB] (29 MBps) Copying: 457/1024 [MB] (27 MBps) Copying: 484/1024 [MB] (27 MBps) Copying: 520/1024 [MB] (35 MBps) Copying: 550/1024 [MB] (29 MBps) Copying: 577/1024 [MB] (27 MBps) Copying: 604/1024 [MB] (26 MBps) Copying: 633/1024 [MB] (28 MBps) Copying: 661/1024 [MB] (27 MBps) Copying: 687/1024 [MB] (25 MBps) Copying: 713/1024 [MB] (26 MBps) Copying: 743/1024 [MB] (30 MBps) Copying: 771/1024 [MB] (28 MBps) Copying: 801/1024 [MB] (29 MBps) Copying: 828/1024 [MB] (27 MBps) Copying: 857/1024 [MB] (29 MBps) Copying: 888/1024 [MB] (30 MBps) Copying: 923/1024 [MB] (35 MBps) Copying: 961/1024 [MB] (38 MBps) Copying: 997/1024 [MB] (35 MBps) Copying: 1024/1024 [MB] (average 29 MBps)[2024-07-10 12:41:17.676122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:08.290 [2024-07-10 12:41:17.676198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:39:08.290 [2024-07-10 12:41:17.676217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:39:08.290 [2024-07-10 12:41:17.676227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:08.290 [2024-07-10 12:41:17.676249] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:39:08.290 [2024-07-10 12:41:17.680245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:08.290 [2024-07-10 12:41:17.680282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:39:08.290 [2024-07-10 12:41:17.680296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.984 ms 00:39:08.290 [2024-07-10 12:41:17.680306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:08.290 [2024-07-10 12:41:17.682173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:08.290 [2024-07-10 12:41:17.682222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:39:08.290 [2024-07-10 12:41:17.682235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.843 ms 00:39:08.290 [2024-07-10 12:41:17.682245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:08.290 [2024-07-10 12:41:17.682272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:08.290 [2024-07-10 12:41:17.682284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:39:08.290 [2024-07-10 12:41:17.682296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:39:08.290 [2024-07-10 12:41:17.682305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:08.290 [2024-07-10 12:41:17.682350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:08.290 [2024-07-10 12:41:17.682361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:39:08.290 
[2024-07-10 12:41:17.682374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:39:08.290 [2024-07-10 12:41:17.682384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:08.290 [2024-07-10 12:41:17.682399] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:39:08.290 [2024-07-10 12:41:17.682414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:39:08.290 [2024-07-10 12:41:17.682427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:39:08.290 [2024-07-10 12:41:17.682439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:39:08.290 [2024-07-10 12:41:17.682450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:39:08.290 [2024-07-10 12:41:17.682460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:39:08.290 [2024-07-10 12:41:17.682471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:39:08.290 [2024-07-10 12:41:17.682482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:39:08.290 [2024-07-10 12:41:17.682493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:39:08.290 [2024-07-10 12:41:17.682503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:39:08.290 [2024-07-10 12:41:17.682514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:39:08.290 [2024-07-10 12:41:17.682525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:39:08.290 [2024-07-10 12:41:17.682537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:39:08.290 [2024-07-10 12:41:17.682547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:39:08.290 [2024-07-10 12:41:17.682557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:39:08.290 [2024-07-10 12:41:17.682568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:39:08.290 [2024-07-10 12:41:17.682578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:39:08.290 [2024-07-10 12:41:17.682589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:39:08.290 [2024-07-10 12:41:17.682599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:39:08.290 [2024-07-10 12:41:17.682610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:39:08.290 [2024-07-10 12:41:17.682620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:39:08.290 [2024-07-10 12:41:17.682848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:39:08.290 [2024-07-10 12:41:17.682859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:39:08.290 [2024-07-10 12:41:17.682870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:39:08.290 [2024-07-10 12:41:17.682881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:39:08.290 [2024-07-10 12:41:17.682891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:39:08.290 [2024-07-10 12:41:17.682901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:39:08.290 [2024-07-10 12:41:17.682914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:39:08.290 [2024-07-10 12:41:17.682926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:39:08.290 [2024-07-10 12:41:17.682936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:39:08.290 [2024-07-10 12:41:17.682947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:39:08.290 [2024-07-10 12:41:17.682958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:39:08.290 [2024-07-10 12:41:17.682970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:39:08.290 [2024-07-10 12:41:17.682981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:39:08.290 [2024-07-10 12:41:17.682992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:39:08.290 [2024-07-10 12:41:17.683002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:39:08.290 [2024-07-10 12:41:17.683013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:39:08.290 [2024-07-10 12:41:17.683023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:39:08.290 [2024-07-10 12:41:17.683033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:39:08.290 [2024-07-10 12:41:17.683044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:39:08.290 [2024-07-10 12:41:17.683055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:39:08.290 [2024-07-10 12:41:17.683079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:39:08.290 [2024-07-10 12:41:17.683089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:39:08.290 [2024-07-10 12:41:17.683100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:39:08.290 [2024-07-10 12:41:17.683110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:39:08.290 [2024-07-10 12:41:17.683121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:39:08.290 [2024-07-10 12:41:17.683132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:39:08.290 [2024-07-10 12:41:17.683142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:39:08.290 [2024-07-10 12:41:17.683154] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:39:08.290 [2024-07-10 12:41:17.683164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:39:08.290 [2024-07-10 12:41:17.683177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:39:08.290 [2024-07-10 12:41:17.683188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:39:08.290 [2024-07-10 12:41:17.683199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:39:08.290 [2024-07-10 12:41:17.683210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:39:08.290 [2024-07-10 12:41:17.683221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:39:08.290 [2024-07-10 12:41:17.683231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:39:08.290 [2024-07-10 12:41:17.683241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:39:08.290 [2024-07-10 12:41:17.683251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:39:08.290 [2024-07-10 12:41:17.683261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:39:08.290 [2024-07-10 12:41:17.683274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:39:08.290 [2024-07-10 12:41:17.683285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:39:08.290 [2024-07-10 12:41:17.683295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:39:08.291 [2024-07-10 12:41:17.683305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:39:08.291 [2024-07-10 12:41:17.683316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:39:08.291 [2024-07-10 12:41:17.683327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:39:08.291 [2024-07-10 12:41:17.683338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:39:08.291 [2024-07-10 12:41:17.683348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:39:08.291 [2024-07-10 12:41:17.683358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:39:08.291 [2024-07-10 12:41:17.683369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:39:08.291 [2024-07-10 12:41:17.683379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:39:08.291 [2024-07-10 12:41:17.683390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:39:08.291 [2024-07-10 12:41:17.683400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:39:08.291 [2024-07-10 12:41:17.683411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:39:08.291 [2024-07-10 
12:41:17.683421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:39:08.291 [2024-07-10 12:41:17.683432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:39:08.291 [2024-07-10 12:41:17.683442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:39:08.291 [2024-07-10 12:41:17.683452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:39:08.291 [2024-07-10 12:41:17.683463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:39:08.291 [2024-07-10 12:41:17.683473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:39:08.291 [2024-07-10 12:41:17.683485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:39:08.291 [2024-07-10 12:41:17.683496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:39:08.291 [2024-07-10 12:41:17.683507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:39:08.291 [2024-07-10 12:41:17.683518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:39:08.291 [2024-07-10 12:41:17.683528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:39:08.291 [2024-07-10 12:41:17.683538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:39:08.291 [2024-07-10 12:41:17.683549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:39:08.291 [2024-07-10 12:41:17.683559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:39:08.291 [2024-07-10 12:41:17.683569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:39:08.291 [2024-07-10 12:41:17.683580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:39:08.291 [2024-07-10 12:41:17.683591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:39:08.291 [2024-07-10 12:41:17.683602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:39:08.291 [2024-07-10 12:41:17.683613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:39:08.291 [2024-07-10 12:41:17.683624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:39:08.291 [2024-07-10 12:41:17.683634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:39:08.291 [2024-07-10 12:41:17.683644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:39:08.291 [2024-07-10 12:41:17.683655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:39:08.291 [2024-07-10 12:41:17.683666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:39:08.291 [2024-07-10 12:41:17.683676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 
00:39:08.291 [2024-07-10 12:41:17.683687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:39:08.291 [2024-07-10 12:41:17.683698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:39:08.291 [2024-07-10 12:41:17.683709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:39:08.291 [2024-07-10 12:41:17.683726] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:39:08.291 [2024-07-10 12:41:17.683746] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c69e99e1-d5e8-4abd-adc9-da7bdc7f3597 00:39:08.291 [2024-07-10 12:41:17.683757] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:39:08.291 [2024-07-10 12:41:17.683766] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:39:08.291 [2024-07-10 12:41:17.683776] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:39:08.291 [2024-07-10 12:41:17.683787] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:39:08.291 [2024-07-10 12:41:17.683797] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:39:08.291 [2024-07-10 12:41:17.683811] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:39:08.291 [2024-07-10 12:41:17.683821] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:39:08.291 [2024-07-10 12:41:17.683831] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:39:08.291 [2024-07-10 12:41:17.683840] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:39:08.291 [2024-07-10 12:41:17.683850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:08.291 [2024-07-10 12:41:17.683861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:39:08.291 [2024-07-10 12:41:17.683871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.455 ms 00:39:08.291 [2024-07-10 12:41:17.683881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:08.291 [2024-07-10 12:41:17.703723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:08.291 [2024-07-10 12:41:17.703772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:39:08.291 [2024-07-10 12:41:17.703786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.854 ms 00:39:08.291 [2024-07-10 12:41:17.703802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:08.291 [2024-07-10 12:41:17.704322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:08.291 [2024-07-10 12:41:17.704337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:39:08.291 [2024-07-10 12:41:17.704349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.495 ms 00:39:08.291 [2024-07-10 12:41:17.704359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:08.291 [2024-07-10 12:41:17.748232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:08.291 [2024-07-10 12:41:17.748270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:39:08.291 [2024-07-10 12:41:17.748288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:08.291 [2024-07-10 12:41:17.748299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:08.291 [2024-07-10 12:41:17.748356] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:08.291 [2024-07-10 12:41:17.748368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:39:08.291 [2024-07-10 12:41:17.748378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:08.291 [2024-07-10 12:41:17.748388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:08.291 [2024-07-10 12:41:17.748442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:08.291 [2024-07-10 12:41:17.748455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:39:08.291 [2024-07-10 12:41:17.748467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:08.291 [2024-07-10 12:41:17.748481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:08.291 [2024-07-10 12:41:17.748497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:08.291 [2024-07-10 12:41:17.748508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:39:08.291 [2024-07-10 12:41:17.748519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:08.291 [2024-07-10 12:41:17.748529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:08.550 [2024-07-10 12:41:17.865929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:08.550 [2024-07-10 12:41:17.865986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:39:08.550 [2024-07-10 12:41:17.866002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:08.550 [2024-07-10 12:41:17.866020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:08.550 [2024-07-10 12:41:17.965864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:08.550 [2024-07-10 12:41:17.965927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:39:08.550 [2024-07-10 12:41:17.965944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:08.550 [2024-07-10 12:41:17.965961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:08.550 [2024-07-10 12:41:17.966035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:08.550 [2024-07-10 12:41:17.966046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:39:08.550 [2024-07-10 12:41:17.966059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:08.550 [2024-07-10 12:41:17.966069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:08.550 [2024-07-10 12:41:17.966111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:08.550 [2024-07-10 12:41:17.966122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:39:08.550 [2024-07-10 12:41:17.966133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:08.550 [2024-07-10 12:41:17.966143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:08.550 [2024-07-10 12:41:17.966233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:08.550 [2024-07-10 12:41:17.966247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:39:08.550 [2024-07-10 12:41:17.966257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:08.550 [2024-07-10 12:41:17.966267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:39:08.550 [2024-07-10 12:41:17.966296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:08.550 [2024-07-10 12:41:17.966312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:39:08.550 [2024-07-10 12:41:17.966323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:08.550 [2024-07-10 12:41:17.966332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:08.550 [2024-07-10 12:41:17.966371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:08.550 [2024-07-10 12:41:17.966382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:39:08.550 [2024-07-10 12:41:17.966392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:08.550 [2024-07-10 12:41:17.966403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:08.550 [2024-07-10 12:41:17.966450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:08.550 [2024-07-10 12:41:17.966473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:39:08.550 [2024-07-10 12:41:17.966484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:08.550 [2024-07-10 12:41:17.966494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:08.550 [2024-07-10 12:41:17.966617] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 290.933 ms, result 0 00:39:09.925 00:39:09.925 00:39:09.925 12:41:19 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:39:09.925 [2024-07-10 12:41:19.354429] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:39:09.925 [2024-07-10 12:41:19.354562] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87718 ] 00:39:10.183 [2024-07-10 12:41:19.527658] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:10.441 [2024-07-10 12:41:19.772934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:10.700 [2024-07-10 12:41:20.156648] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:39:10.700 [2024-07-10 12:41:20.156742] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:39:10.987 [2024-07-10 12:41:20.318414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:10.987 [2024-07-10 12:41:20.318482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:39:10.987 [2024-07-10 12:41:20.318499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:39:10.987 [2024-07-10 12:41:20.318510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:10.987 [2024-07-10 12:41:20.318572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:10.987 [2024-07-10 12:41:20.318585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:39:10.987 [2024-07-10 12:41:20.318596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:39:10.987 [2024-07-10 12:41:20.318611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:10.987 [2024-07-10 12:41:20.318632] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:39:10.987 [2024-07-10 12:41:20.319899] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:39:10.987 [2024-07-10 12:41:20.319931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:10.987 [2024-07-10 12:41:20.319953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:39:10.987 [2024-07-10 12:41:20.319968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.305 ms 00:39:10.987 [2024-07-10 12:41:20.319980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:10.987 [2024-07-10 12:41:20.320470] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:39:10.987 [2024-07-10 12:41:20.320494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:10.987 [2024-07-10 12:41:20.320508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:39:10.987 [2024-07-10 12:41:20.320523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:39:10.987 [2024-07-10 12:41:20.320543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:10.987 [2024-07-10 12:41:20.320605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:10.987 [2024-07-10 12:41:20.320616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:39:10.987 [2024-07-10 12:41:20.320630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:39:10.987 [2024-07-10 12:41:20.320642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:10.987 [2024-07-10 12:41:20.321141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:39:10.987 [2024-07-10 12:41:20.321161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:39:10.987 [2024-07-10 12:41:20.321175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.460 ms 00:39:10.987 [2024-07-10 12:41:20.321194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:10.987 [2024-07-10 12:41:20.321287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:10.987 [2024-07-10 12:41:20.321300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:39:10.987 [2024-07-10 12:41:20.321313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:39:10.987 [2024-07-10 12:41:20.321324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:10.987 [2024-07-10 12:41:20.321358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:10.987 [2024-07-10 12:41:20.321369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:39:10.987 [2024-07-10 12:41:20.321382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:39:10.987 [2024-07-10 12:41:20.321392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:10.987 [2024-07-10 12:41:20.321424] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:39:10.987 [2024-07-10 12:41:20.328097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:10.987 [2024-07-10 12:41:20.328127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:39:10.987 [2024-07-10 12:41:20.328150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.690 ms 00:39:10.987 [2024-07-10 12:41:20.328160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:10.987 [2024-07-10 12:41:20.328194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:10.987 [2024-07-10 12:41:20.328204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:39:10.987 [2024-07-10 12:41:20.328214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:39:10.987 [2024-07-10 12:41:20.328224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:10.987 [2024-07-10 12:41:20.328274] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:39:10.988 [2024-07-10 12:41:20.328301] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:39:10.988 [2024-07-10 12:41:20.328335] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:39:10.988 [2024-07-10 12:41:20.328355] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:39:10.988 [2024-07-10 12:41:20.328438] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:39:10.988 [2024-07-10 12:41:20.328451] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:39:10.988 [2024-07-10 12:41:20.328463] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:39:10.988 [2024-07-10 12:41:20.328476] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:39:10.988 [2024-07-10 12:41:20.328487] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:39:10.988 [2024-07-10 12:41:20.328498] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:39:10.988 [2024-07-10 12:41:20.328507] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:39:10.988 [2024-07-10 12:41:20.328516] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:39:10.988 [2024-07-10 12:41:20.328530] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:39:10.988 [2024-07-10 12:41:20.328540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:10.988 [2024-07-10 12:41:20.328550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:39:10.988 [2024-07-10 12:41:20.328560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:39:10.988 [2024-07-10 12:41:20.328569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:10.988 [2024-07-10 12:41:20.328635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:10.988 [2024-07-10 12:41:20.328646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:39:10.988 [2024-07-10 12:41:20.328656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:39:10.988 [2024-07-10 12:41:20.328666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:10.988 [2024-07-10 12:41:20.328756] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:39:10.988 [2024-07-10 12:41:20.328770] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:39:10.988 [2024-07-10 12:41:20.328780] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:39:10.988 [2024-07-10 12:41:20.328791] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:10.988 [2024-07-10 12:41:20.328800] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:39:10.988 [2024-07-10 12:41:20.328809] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:39:10.988 [2024-07-10 12:41:20.328820] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:39:10.988 [2024-07-10 12:41:20.328830] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:39:10.988 [2024-07-10 12:41:20.328839] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:39:10.988 [2024-07-10 12:41:20.328848] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:39:10.988 [2024-07-10 12:41:20.328857] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:39:10.988 [2024-07-10 12:41:20.328866] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:39:10.988 [2024-07-10 12:41:20.328875] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:39:10.988 [2024-07-10 12:41:20.328885] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:39:10.988 [2024-07-10 12:41:20.328894] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:39:10.988 [2024-07-10 12:41:20.328903] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:10.988 [2024-07-10 12:41:20.328912] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:39:10.988 [2024-07-10 12:41:20.328921] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:39:10.988 [2024-07-10 12:41:20.328930] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:10.988 [2024-07-10 12:41:20.328940] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:39:10.988 [2024-07-10 12:41:20.328949] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:39:10.988 [2024-07-10 12:41:20.328957] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:10.988 [2024-07-10 12:41:20.328978] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:39:10.988 [2024-07-10 12:41:20.328987] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:39:10.988 [2024-07-10 12:41:20.328997] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:10.988 [2024-07-10 12:41:20.329006] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:39:10.988 [2024-07-10 12:41:20.329015] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:39:10.988 [2024-07-10 12:41:20.329024] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:10.988 [2024-07-10 12:41:20.329032] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:39:10.988 [2024-07-10 12:41:20.329042] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:39:10.988 [2024-07-10 12:41:20.329050] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:10.988 [2024-07-10 12:41:20.329060] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:39:10.988 [2024-07-10 12:41:20.329069] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:39:10.988 [2024-07-10 12:41:20.329077] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:39:10.988 [2024-07-10 12:41:20.329086] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:39:10.988 [2024-07-10 12:41:20.329095] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:39:10.988 [2024-07-10 12:41:20.329104] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:39:10.988 [2024-07-10 12:41:20.329113] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:39:10.988 [2024-07-10 12:41:20.329124] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:39:10.988 [2024-07-10 12:41:20.329133] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:10.988 [2024-07-10 12:41:20.329143] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:39:10.988 [2024-07-10 12:41:20.329152] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:39:10.988 [2024-07-10 12:41:20.329160] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:10.988 [2024-07-10 12:41:20.329169] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:39:10.988 [2024-07-10 12:41:20.329179] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:39:10.988 [2024-07-10 12:41:20.329188] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:39:10.988 [2024-07-10 12:41:20.329198] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:10.988 [2024-07-10 12:41:20.329208] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:39:10.988 [2024-07-10 12:41:20.329217] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:39:10.988 [2024-07-10 12:41:20.329226] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:39:10.988 
[2024-07-10 12:41:20.329235] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:39:10.988 [2024-07-10 12:41:20.329244] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:39:10.988 [2024-07-10 12:41:20.329253] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:39:10.988 [2024-07-10 12:41:20.329263] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:39:10.988 [2024-07-10 12:41:20.329274] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:39:10.988 [2024-07-10 12:41:20.329286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:39:10.988 [2024-07-10 12:41:20.329296] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:39:10.988 [2024-07-10 12:41:20.329306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:39:10.989 [2024-07-10 12:41:20.329315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:39:10.989 [2024-07-10 12:41:20.329325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:39:10.989 [2024-07-10 12:41:20.329335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:39:10.989 [2024-07-10 12:41:20.329345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:39:10.989 [2024-07-10 12:41:20.329355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:39:10.989 [2024-07-10 12:41:20.329365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:39:10.989 [2024-07-10 12:41:20.329374] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:39:10.989 [2024-07-10 12:41:20.329384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:39:10.989 [2024-07-10 12:41:20.329394] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:39:10.989 [2024-07-10 12:41:20.329404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:39:10.989 [2024-07-10 12:41:20.329414] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:39:10.989 [2024-07-10 12:41:20.329424] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:39:10.989 [2024-07-10 12:41:20.329443] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:39:10.989 [2024-07-10 12:41:20.329455] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:39:10.989 [2024-07-10 12:41:20.329465] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:39:10.989 [2024-07-10 12:41:20.329476] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:39:10.989 [2024-07-10 12:41:20.329486] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:39:10.989 [2024-07-10 12:41:20.329497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:10.989 [2024-07-10 12:41:20.329507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:39:10.989 [2024-07-10 12:41:20.329517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.803 ms 00:39:10.989 [2024-07-10 12:41:20.329526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:10.989 [2024-07-10 12:41:20.381849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:10.989 [2024-07-10 12:41:20.381888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:39:10.989 [2024-07-10 12:41:20.381903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.361 ms 00:39:10.989 [2024-07-10 12:41:20.381914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:10.989 [2024-07-10 12:41:20.381995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:10.989 [2024-07-10 12:41:20.382007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:39:10.989 [2024-07-10 12:41:20.382018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:39:10.989 [2024-07-10 12:41:20.382028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:10.989 [2024-07-10 12:41:20.428721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:10.989 [2024-07-10 12:41:20.428771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:39:10.989 [2024-07-10 12:41:20.428785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.710 ms 00:39:10.989 [2024-07-10 12:41:20.428796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:10.989 [2024-07-10 12:41:20.428833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:10.989 [2024-07-10 12:41:20.428844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:39:10.989 [2024-07-10 12:41:20.428860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:39:10.989 [2024-07-10 12:41:20.428870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:10.989 [2024-07-10 12:41:20.428980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:10.989 [2024-07-10 12:41:20.428994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:39:10.989 [2024-07-10 12:41:20.429005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:39:10.989 [2024-07-10 12:41:20.429016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:10.989 [2024-07-10 12:41:20.429131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:10.989 [2024-07-10 12:41:20.429144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:39:10.989 [2024-07-10 12:41:20.429155] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:39:10.989 [2024-07-10 12:41:20.429168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:10.989 [2024-07-10 12:41:20.448816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:10.989 [2024-07-10 12:41:20.448975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:39:10.989 [2024-07-10 12:41:20.449056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.657 ms 00:39:10.989 [2024-07-10 12:41:20.449091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:10.989 [2024-07-10 12:41:20.449255] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:39:10.989 [2024-07-10 12:41:20.449374] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:39:10.989 [2024-07-10 12:41:20.449427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:10.989 [2024-07-10 12:41:20.449457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:39:10.989 [2024-07-10 12:41:20.449522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.205 ms 00:39:10.989 [2024-07-10 12:41:20.449555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:10.989 [2024-07-10 12:41:20.460011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:10.989 [2024-07-10 12:41:20.460130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:39:10.989 [2024-07-10 12:41:20.460207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.424 ms 00:39:10.989 [2024-07-10 12:41:20.460241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:10.989 [2024-07-10 12:41:20.460374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:10.989 [2024-07-10 12:41:20.460406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:39:10.989 [2024-07-10 12:41:20.460435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:39:10.989 [2024-07-10 12:41:20.460464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:10.989 [2024-07-10 12:41:20.460585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:10.989 [2024-07-10 12:41:20.460624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:39:10.989 [2024-07-10 12:41:20.460661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:39:10.989 [2024-07-10 12:41:20.460690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:10.989 [2024-07-10 12:41:20.461441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:10.989 [2024-07-10 12:41:20.461552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:39:10.989 [2024-07-10 12:41:20.461619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.676 ms 00:39:10.989 [2024-07-10 12:41:20.461634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:10.989 [2024-07-10 12:41:20.461659] mngt/ftl_mngt_p2l.c: 132:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:39:10.989 [2024-07-10 12:41:20.461672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:10.989 [2024-07-10 12:41:20.461695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:39:10.989 [2024-07-10 12:41:20.461710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:39:10.989 [2024-07-10 12:41:20.461719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.248 [2024-07-10 12:41:20.475012] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:39:11.248 [2024-07-10 12:41:20.475223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:11.248 [2024-07-10 12:41:20.475238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:39:11.248 [2024-07-10 12:41:20.475250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.489 ms 00:39:11.248 [2024-07-10 12:41:20.475260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.248 [2024-07-10 12:41:20.477288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:11.248 [2024-07-10 12:41:20.477317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:39:11.248 [2024-07-10 12:41:20.477329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.009 ms 00:39:11.248 [2024-07-10 12:41:20.477344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.248 [2024-07-10 12:41:20.477444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:11.248 [2024-07-10 12:41:20.477457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:39:11.248 [2024-07-10 12:41:20.477468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:39:11.248 [2024-07-10 12:41:20.477478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.248 [2024-07-10 12:41:20.477504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:11.248 [2024-07-10 12:41:20.477515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:39:11.248 [2024-07-10 12:41:20.477525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:39:11.248 [2024-07-10 12:41:20.477534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.248 [2024-07-10 12:41:20.477573] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:39:11.248 [2024-07-10 12:41:20.477585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:11.248 [2024-07-10 12:41:20.477595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:39:11.248 [2024-07-10 12:41:20.477605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:39:11.248 [2024-07-10 12:41:20.477615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.248 [2024-07-10 12:41:20.514711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:11.248 [2024-07-10 12:41:20.514889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:39:11.248 [2024-07-10 12:41:20.514964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.135 ms 00:39:11.248 [2024-07-10 12:41:20.515008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.248 [2024-07-10 12:41:20.515111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:11.248 [2024-07-10 12:41:20.515149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:39:11.248 [2024-07-10 12:41:20.515180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.036 ms 00:39:11.248 [2024-07-10 12:41:20.515210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.248 [2024-07-10 12:41:20.516450] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 197.819 ms, result 0 00:39:45.566  Copying: 34/1024 [MB] (34 MBps) Copying: 63/1024 [MB] (29 MBps) Copying: 92/1024 [MB] (28 MBps) Copying: 122/1024 [MB] (29 MBps) Copying: 150/1024 [MB] (28 MBps) Copying: 184/1024 [MB] (33 MBps) Copying: 214/1024 [MB] (30 MBps) Copying: 245/1024 [MB] (31 MBps) Copying: 279/1024 [MB] (33 MBps) Copying: 311/1024 [MB] (32 MBps) Copying: 345/1024 [MB] (33 MBps) Copying: 377/1024 [MB] (32 MBps) Copying: 411/1024 [MB] (33 MBps) Copying: 445/1024 [MB] (34 MBps) Copying: 477/1024 [MB] (32 MBps) Copying: 507/1024 [MB] (29 MBps) Copying: 537/1024 [MB] (29 MBps) Copying: 566/1024 [MB] (29 MBps) Copying: 597/1024 [MB] (30 MBps) Copying: 625/1024 [MB] (28 MBps) Copying: 655/1024 [MB] (29 MBps) Copying: 683/1024 [MB] (28 MBps) Copying: 715/1024 [MB] (31 MBps) Copying: 748/1024 [MB] (32 MBps) Copying: 776/1024 [MB] (28 MBps) Copying: 805/1024 [MB] (28 MBps) Copying: 833/1024 [MB] (28 MBps) Copying: 862/1024 [MB] (28 MBps) Copying: 890/1024 [MB] (27 MBps) Copying: 918/1024 [MB] (28 MBps) Copying: 947/1024 [MB] (29 MBps) Copying: 975/1024 [MB] (27 MBps) Copying: 1005/1024 [MB] (29 MBps) Copying: 1024/1024 [MB] (average 30 MBps)[2024-07-10 12:41:54.789966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:45.566 [2024-07-10 12:41:54.790312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:39:45.566 [2024-07-10 12:41:54.790437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:39:45.566 [2024-07-10 12:41:54.790498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:45.566 [2024-07-10 12:41:54.790704] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:39:45.566 [2024-07-10 12:41:54.795023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:45.566 [2024-07-10 12:41:54.795178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:39:45.566 [2024-07-10 12:41:54.795277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.141 ms 00:39:45.566 [2024-07-10 12:41:54.795317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:45.566 [2024-07-10 12:41:54.795667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:45.566 [2024-07-10 12:41:54.795790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:39:45.566 [2024-07-10 12:41:54.795878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.199 ms 00:39:45.566 [2024-07-10 12:41:54.795926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:45.566 [2024-07-10 12:41:54.796392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:45.566 [2024-07-10 12:41:54.796441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:39:45.566 [2024-07-10 12:41:54.796552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:39:45.566 [2024-07-10 12:41:54.796608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:45.566 [2024-07-10 12:41:54.796816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:45.566 [2024-07-10 12:41:54.796940] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:39:45.566 [2024-07-10 12:41:54.797030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:39:45.566 [2024-07-10 12:41:54.797111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:45.566 [2024-07-10 12:41:54.797174] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:39:45.566 [2024-07-10 12:41:54.797314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:39:45.566 [2024-07-10 12:41:54.797393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:39:45.566 [2024-07-10 12:41:54.797513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:39:45.566 [2024-07-10 12:41:54.797585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:39:45.566 [2024-07-10 12:41:54.797719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:39:45.566 [2024-07-10 12:41:54.797804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:39:45.566 [2024-07-10 12:41:54.798164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.798280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.798394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.798468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.798546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.798666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.798788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.798850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.798951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.798969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.798985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.799001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.799017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.799032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.799048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.799064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 
state: free 00:39:45.567 [2024-07-10 12:41:54.799080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.799095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.799111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.799126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.799141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.799156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.799171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.799187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.799202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.799217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.799232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.799247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.799263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.799278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.799293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.799308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.799324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.799339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.799355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.799370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.799385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.799400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.799417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.799433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.799455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 
0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.799473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.799491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.799509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.799527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.799544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.799562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.799579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.799597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.799615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.799632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.799665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.799683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.799700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.799718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.799892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.799968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.800109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.800450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.800569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.800632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.800807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.800936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.801008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.801119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.801139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.801156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.801174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.801192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.801210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.801228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.801247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.801269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.801286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.801304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.801322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.801339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.801356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.801519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.801536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.801554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.801571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.801587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.801605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.801622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.801639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.801660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.801681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.801698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.801714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.801740] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.801757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.801772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.801788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:39:45.567 [2024-07-10 12:41:54.801815] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:39:45.567 [2024-07-10 12:41:54.801831] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c69e99e1-d5e8-4abd-adc9-da7bdc7f3597 00:39:45.567 [2024-07-10 12:41:54.801847] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:39:45.568 [2024-07-10 12:41:54.801860] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:39:45.568 [2024-07-10 12:41:54.801879] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:39:45.568 [2024-07-10 12:41:54.801894] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:39:45.568 [2024-07-10 12:41:54.801907] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:39:45.568 [2024-07-10 12:41:54.801922] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:39:45.568 [2024-07-10 12:41:54.801937] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:39:45.568 [2024-07-10 12:41:54.801950] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:39:45.568 [2024-07-10 12:41:54.801966] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:39:45.568 [2024-07-10 12:41:54.801981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:45.568 [2024-07-10 12:41:54.801995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:39:45.568 [2024-07-10 12:41:54.802011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.816 ms 00:39:45.568 [2024-07-10 12:41:54.802025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:45.568 [2024-07-10 12:41:54.822845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:45.568 [2024-07-10 12:41:54.822882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:39:45.568 [2024-07-10 12:41:54.822897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.822 ms 00:39:45.568 [2024-07-10 12:41:54.822908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:45.568 [2024-07-10 12:41:54.823509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:45.568 [2024-07-10 12:41:54.823527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:39:45.568 [2024-07-10 12:41:54.823544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.546 ms 00:39:45.568 [2024-07-10 12:41:54.823560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:45.568 [2024-07-10 12:41:54.867932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:45.568 [2024-07-10 12:41:54.867991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:39:45.568 [2024-07-10 12:41:54.868008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:45.568 [2024-07-10 12:41:54.868026] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:39:45.568 [2024-07-10 12:41:54.868116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:45.568 [2024-07-10 12:41:54.868131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:39:45.568 [2024-07-10 12:41:54.868146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:45.568 [2024-07-10 12:41:54.868170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:45.568 [2024-07-10 12:41:54.868244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:45.568 [2024-07-10 12:41:54.868266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:39:45.568 [2024-07-10 12:41:54.868278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:45.568 [2024-07-10 12:41:54.868289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:45.568 [2024-07-10 12:41:54.868308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:45.568 [2024-07-10 12:41:54.868320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:39:45.568 [2024-07-10 12:41:54.868331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:45.568 [2024-07-10 12:41:54.868342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:45.568 [2024-07-10 12:41:54.989176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:45.568 [2024-07-10 12:41:54.989262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:39:45.568 [2024-07-10 12:41:54.989286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:45.568 [2024-07-10 12:41:54.989307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:45.825 [2024-07-10 12:41:55.089830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:45.825 [2024-07-10 12:41:55.089898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:39:45.825 [2024-07-10 12:41:55.089919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:45.825 [2024-07-10 12:41:55.089932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:45.825 [2024-07-10 12:41:55.090014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:45.825 [2024-07-10 12:41:55.090026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:39:45.825 [2024-07-10 12:41:55.090045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:45.825 [2024-07-10 12:41:55.090057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:45.825 [2024-07-10 12:41:55.090098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:45.825 [2024-07-10 12:41:55.090111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:39:45.825 [2024-07-10 12:41:55.090121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:45.825 [2024-07-10 12:41:55.090132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:45.825 [2024-07-10 12:41:55.090226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:45.825 [2024-07-10 12:41:55.090240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:39:45.825 [2024-07-10 12:41:55.090257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:39:45.825 [2024-07-10 12:41:55.090268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:45.825 [2024-07-10 12:41:55.090298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:45.825 [2024-07-10 12:41:55.090312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:39:45.825 [2024-07-10 12:41:55.090323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:45.825 [2024-07-10 12:41:55.090339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:45.825 [2024-07-10 12:41:55.090384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:45.825 [2024-07-10 12:41:55.090401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:39:45.825 [2024-07-10 12:41:55.090417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:45.825 [2024-07-10 12:41:55.090437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:45.825 [2024-07-10 12:41:55.090498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:45.825 [2024-07-10 12:41:55.090515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:39:45.825 [2024-07-10 12:41:55.090531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:45.825 [2024-07-10 12:41:55.090546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:45.825 [2024-07-10 12:41:55.090709] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 301.192 ms, result 0 00:39:47.197 00:39:47.197 00:39:47.197 12:41:56 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:39:49.093 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:39:49.093 12:41:58 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:39:49.093 [2024-07-10 12:41:58.357549] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:39:49.093 [2024-07-10 12:41:58.357681] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88125 ] 00:39:49.093 [2024-07-10 12:41:58.521323] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:49.351 [2024-07-10 12:41:58.792659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:49.919 [2024-07-10 12:41:59.196287] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:39:49.919 [2024-07-10 12:41:59.196364] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:39:49.919 [2024-07-10 12:41:59.360326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:49.919 [2024-07-10 12:41:59.360388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:39:49.919 [2024-07-10 12:41:59.360406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:39:49.919 [2024-07-10 12:41:59.360416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:49.919 [2024-07-10 12:41:59.360476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:49.919 [2024-07-10 12:41:59.360490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:39:49.919 [2024-07-10 12:41:59.360501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:39:49.919 [2024-07-10 12:41:59.360514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:49.919 [2024-07-10 12:41:59.360537] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:39:49.919 [2024-07-10 12:41:59.361679] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:39:49.919 [2024-07-10 12:41:59.361721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:49.919 [2024-07-10 12:41:59.361758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:39:49.919 [2024-07-10 12:41:59.361778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.190 ms 00:39:49.919 [2024-07-10 12:41:59.361795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:49.919 [2024-07-10 12:41:59.362254] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:39:49.919 [2024-07-10 12:41:59.362281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:49.919 [2024-07-10 12:41:59.362293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:39:49.919 [2024-07-10 12:41:59.362305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:39:49.919 [2024-07-10 12:41:59.362318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:49.919 [2024-07-10 12:41:59.362378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:49.919 [2024-07-10 12:41:59.362398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:39:49.919 [2024-07-10 12:41:59.362416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:39:49.919 [2024-07-10 12:41:59.362433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:49.919 [2024-07-10 12:41:59.362928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:39:49.919 [2024-07-10 12:41:59.362952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:39:49.919 [2024-07-10 12:41:59.362967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.440 ms 00:39:49.919 [2024-07-10 12:41:59.362983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:49.919 [2024-07-10 12:41:59.363062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:49.919 [2024-07-10 12:41:59.363077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:39:49.919 [2024-07-10 12:41:59.363090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:39:49.919 [2024-07-10 12:41:59.363102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:49.919 [2024-07-10 12:41:59.363135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:49.919 [2024-07-10 12:41:59.363148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:39:49.919 [2024-07-10 12:41:59.363161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:39:49.919 [2024-07-10 12:41:59.363173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:49.919 [2024-07-10 12:41:59.363203] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:39:49.919 [2024-07-10 12:41:59.368837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:49.919 [2024-07-10 12:41:59.368871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:39:49.919 [2024-07-10 12:41:59.368888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.646 ms 00:39:49.919 [2024-07-10 12:41:59.368898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:49.919 [2024-07-10 12:41:59.368935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:49.919 [2024-07-10 12:41:59.368946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:39:49.919 [2024-07-10 12:41:59.368957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:39:49.919 [2024-07-10 12:41:59.368967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:49.919 [2024-07-10 12:41:59.369018] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:39:49.919 [2024-07-10 12:41:59.369044] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:39:49.919 [2024-07-10 12:41:59.369078] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:39:49.919 [2024-07-10 12:41:59.369099] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:39:49.919 [2024-07-10 12:41:59.369180] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:39:49.919 [2024-07-10 12:41:59.369193] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:39:49.919 [2024-07-10 12:41:59.369206] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:39:49.919 [2024-07-10 12:41:59.369219] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:39:49.919 [2024-07-10 12:41:59.369231] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:39:49.919 [2024-07-10 12:41:59.369243] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:39:49.919 [2024-07-10 12:41:59.369254] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:39:49.919 [2024-07-10 12:41:59.369263] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:39:49.919 [2024-07-10 12:41:59.369277] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:39:49.919 [2024-07-10 12:41:59.369287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:49.919 [2024-07-10 12:41:59.369297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:39:49.919 [2024-07-10 12:41:59.369308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:39:49.919 [2024-07-10 12:41:59.369318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:49.919 [2024-07-10 12:41:59.369384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:49.919 [2024-07-10 12:41:59.369394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:39:49.919 [2024-07-10 12:41:59.369405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:39:49.919 [2024-07-10 12:41:59.369415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:49.919 [2024-07-10 12:41:59.369499] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:39:49.919 [2024-07-10 12:41:59.369512] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:39:49.919 [2024-07-10 12:41:59.369523] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:39:49.919 [2024-07-10 12:41:59.369533] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:49.919 [2024-07-10 12:41:59.369543] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:39:49.919 [2024-07-10 12:41:59.369553] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:39:49.919 [2024-07-10 12:41:59.369562] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:39:49.919 [2024-07-10 12:41:59.369573] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:39:49.919 [2024-07-10 12:41:59.369582] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:39:49.919 [2024-07-10 12:41:59.369591] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:39:49.919 [2024-07-10 12:41:59.369601] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:39:49.919 [2024-07-10 12:41:59.369610] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:39:49.919 [2024-07-10 12:41:59.369623] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:39:49.919 [2024-07-10 12:41:59.369633] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:39:49.919 [2024-07-10 12:41:59.369642] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:39:49.919 [2024-07-10 12:41:59.369651] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:49.919 [2024-07-10 12:41:59.369660] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:39:49.919 [2024-07-10 12:41:59.369670] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:39:49.919 [2024-07-10 12:41:59.369679] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:49.919 [2024-07-10 12:41:59.369688] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:39:49.919 [2024-07-10 12:41:59.369698] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:39:49.919 [2024-07-10 12:41:59.369708] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:49.919 [2024-07-10 12:41:59.369746] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:39:49.919 [2024-07-10 12:41:59.369757] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:39:49.919 [2024-07-10 12:41:59.369766] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:49.919 [2024-07-10 12:41:59.369775] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:39:49.920 [2024-07-10 12:41:59.369785] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:39:49.920 [2024-07-10 12:41:59.369794] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:49.920 [2024-07-10 12:41:59.369803] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:39:49.920 [2024-07-10 12:41:59.369812] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:39:49.920 [2024-07-10 12:41:59.369822] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:49.920 [2024-07-10 12:41:59.369831] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:39:49.920 [2024-07-10 12:41:59.369840] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:39:49.920 [2024-07-10 12:41:59.369849] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:39:49.920 [2024-07-10 12:41:59.369859] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:39:49.920 [2024-07-10 12:41:59.369868] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:39:49.920 [2024-07-10 12:41:59.369877] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:39:49.920 [2024-07-10 12:41:59.369886] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:39:49.920 [2024-07-10 12:41:59.369895] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:39:49.920 [2024-07-10 12:41:59.369904] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:49.920 [2024-07-10 12:41:59.369913] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:39:49.920 [2024-07-10 12:41:59.369923] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:39:49.920 [2024-07-10 12:41:59.369933] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:49.920 [2024-07-10 12:41:59.369941] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:39:49.920 [2024-07-10 12:41:59.369953] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:39:49.920 [2024-07-10 12:41:59.369963] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:39:49.920 [2024-07-10 12:41:59.369972] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:49.920 [2024-07-10 12:41:59.369982] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:39:49.920 [2024-07-10 12:41:59.369992] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:39:49.920 [2024-07-10 12:41:59.370001] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:39:49.920 
[2024-07-10 12:41:59.370010] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:39:49.920 [2024-07-10 12:41:59.370020] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:39:49.920 [2024-07-10 12:41:59.370029] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:39:49.920 [2024-07-10 12:41:59.370039] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:39:49.920 [2024-07-10 12:41:59.370051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:39:49.920 [2024-07-10 12:41:59.370063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:39:49.920 [2024-07-10 12:41:59.370073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:39:49.920 [2024-07-10 12:41:59.370085] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:39:49.920 [2024-07-10 12:41:59.370096] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:39:49.920 [2024-07-10 12:41:59.370106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:39:49.920 [2024-07-10 12:41:59.370116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:39:49.920 [2024-07-10 12:41:59.370126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:39:49.920 [2024-07-10 12:41:59.370137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:39:49.920 [2024-07-10 12:41:59.370147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:39:49.920 [2024-07-10 12:41:59.370157] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:39:49.920 [2024-07-10 12:41:59.370167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:39:49.920 [2024-07-10 12:41:59.370177] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:39:49.920 [2024-07-10 12:41:59.370188] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:39:49.920 [2024-07-10 12:41:59.370198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:39:49.920 [2024-07-10 12:41:59.370208] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:39:49.920 [2024-07-10 12:41:59.370222] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:39:49.920 [2024-07-10 12:41:59.370234] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:39:49.920 [2024-07-10 12:41:59.370245] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:39:49.920 [2024-07-10 12:41:59.370257] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:39:49.920 [2024-07-10 12:41:59.370267] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:39:49.920 [2024-07-10 12:41:59.370278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:49.920 [2024-07-10 12:41:59.370290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:39:49.920 [2024-07-10 12:41:59.370300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.830 ms 00:39:49.920 [2024-07-10 12:41:59.370310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:50.180 [2024-07-10 12:41:59.420818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:50.180 [2024-07-10 12:41:59.420872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:39:50.180 [2024-07-10 12:41:59.420889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.542 ms 00:39:50.180 [2024-07-10 12:41:59.420901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:50.180 [2024-07-10 12:41:59.421006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:50.180 [2024-07-10 12:41:59.421017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:39:50.180 [2024-07-10 12:41:59.421029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:39:50.180 [2024-07-10 12:41:59.421039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:50.180 [2024-07-10 12:41:59.470596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:50.180 [2024-07-10 12:41:59.470651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:39:50.180 [2024-07-10 12:41:59.470668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.554 ms 00:39:50.180 [2024-07-10 12:41:59.470679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:50.180 [2024-07-10 12:41:59.470755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:50.180 [2024-07-10 12:41:59.470768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:39:50.180 [2024-07-10 12:41:59.470784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:39:50.180 [2024-07-10 12:41:59.470795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:50.180 [2024-07-10 12:41:59.470930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:50.180 [2024-07-10 12:41:59.470946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:39:50.180 [2024-07-10 12:41:59.470958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:39:50.180 [2024-07-10 12:41:59.470968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:50.180 [2024-07-10 12:41:59.471086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:50.180 [2024-07-10 12:41:59.471099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:39:50.180 [2024-07-10 12:41:59.471109] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:39:50.180 [2024-07-10 12:41:59.471125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:50.181 [2024-07-10 12:41:59.493066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:50.181 [2024-07-10 12:41:59.493136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:39:50.181 [2024-07-10 12:41:59.493167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.949 ms 00:39:50.181 [2024-07-10 12:41:59.493184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:50.181 [2024-07-10 12:41:59.493416] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:39:50.181 [2024-07-10 12:41:59.493444] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:39:50.181 [2024-07-10 12:41:59.493466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:50.181 [2024-07-10 12:41:59.493485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:39:50.181 [2024-07-10 12:41:59.493504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:39:50.181 [2024-07-10 12:41:59.493521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:50.181 [2024-07-10 12:41:59.504088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:50.181 [2024-07-10 12:41:59.504131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:39:50.181 [2024-07-10 12:41:59.504155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.547 ms 00:39:50.181 [2024-07-10 12:41:59.504173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:50.181 [2024-07-10 12:41:59.504303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:50.181 [2024-07-10 12:41:59.504319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:39:50.181 [2024-07-10 12:41:59.504333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:39:50.181 [2024-07-10 12:41:59.504346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:50.181 [2024-07-10 12:41:59.504405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:50.181 [2024-07-10 12:41:59.504421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:39:50.181 [2024-07-10 12:41:59.504439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:39:50.181 [2024-07-10 12:41:59.504452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:50.181 [2024-07-10 12:41:59.505168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:50.181 [2024-07-10 12:41:59.505208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:39:50.181 [2024-07-10 12:41:59.505223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.668 ms 00:39:50.181 [2024-07-10 12:41:59.505237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:50.181 [2024-07-10 12:41:59.505265] mngt/ftl_mngt_p2l.c: 132:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:39:50.181 [2024-07-10 12:41:59.505281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:50.181 [2024-07-10 12:41:59.505306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:39:50.181 [2024-07-10 12:41:59.505326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:39:50.181 [2024-07-10 12:41:59.505339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:50.181 [2024-07-10 12:41:59.518318] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:39:50.181 [2024-07-10 12:41:59.518563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:50.181 [2024-07-10 12:41:59.518581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:39:50.181 [2024-07-10 12:41:59.518597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.218 ms 00:39:50.181 [2024-07-10 12:41:59.518612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:50.181 [2024-07-10 12:41:59.520636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:50.181 [2024-07-10 12:41:59.520676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:39:50.181 [2024-07-10 12:41:59.520691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.987 ms 00:39:50.181 [2024-07-10 12:41:59.520708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:50.181 [2024-07-10 12:41:59.520838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:50.181 [2024-07-10 12:41:59.520855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:39:50.181 [2024-07-10 12:41:59.520869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:39:50.181 [2024-07-10 12:41:59.520882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:50.181 [2024-07-10 12:41:59.520911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:50.181 [2024-07-10 12:41:59.520924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:39:50.181 [2024-07-10 12:41:59.520938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:39:50.181 [2024-07-10 12:41:59.520950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:50.181 [2024-07-10 12:41:59.520987] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:39:50.181 [2024-07-10 12:41:59.521002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:50.181 [2024-07-10 12:41:59.521014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:39:50.181 [2024-07-10 12:41:59.521028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:39:50.181 [2024-07-10 12:41:59.521040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:50.181 [2024-07-10 12:41:59.560801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:50.181 [2024-07-10 12:41:59.560862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:39:50.181 [2024-07-10 12:41:59.560879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.801 ms 00:39:50.181 [2024-07-10 12:41:59.560899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:50.181 [2024-07-10 12:41:59.560993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:50.181 [2024-07-10 12:41:59.561007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:39:50.181 [2024-07-10 12:41:59.561020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.039 ms 00:39:50.181 [2024-07-10 12:41:59.561031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:50.181 [2024-07-10 12:41:59.562359] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 201.834 ms, result 0 00:40:26.279  Copying: 28/1024 [MB] (28 MBps) Copying: 57/1024 [MB] (28 MBps) Copying: 87/1024 [MB] (30 MBps) Copying: 118/1024 [MB] (30 MBps) Copying: 148/1024 [MB] (30 MBps) Copying: 178/1024 [MB] (29 MBps) Copying: 206/1024 [MB] (28 MBps) Copying: 235/1024 [MB] (28 MBps) Copying: 263/1024 [MB] (28 MBps) Copying: 292/1024 [MB] (28 MBps) Copying: 321/1024 [MB] (29 MBps) Copying: 350/1024 [MB] (28 MBps) Copying: 378/1024 [MB] (28 MBps) Copying: 406/1024 [MB] (27 MBps) Copying: 435/1024 [MB] (28 MBps) Copying: 462/1024 [MB] (27 MBps) Copying: 489/1024 [MB] (26 MBps) Copying: 518/1024 [MB] (28 MBps) Copying: 548/1024 [MB] (30 MBps) Copying: 576/1024 [MB] (28 MBps) Copying: 606/1024 [MB] (29 MBps) Copying: 634/1024 [MB] (28 MBps) Copying: 665/1024 [MB] (30 MBps) Copying: 695/1024 [MB] (30 MBps) Copying: 723/1024 [MB] (27 MBps) Copying: 755/1024 [MB] (31 MBps) Copying: 785/1024 [MB] (30 MBps) Copying: 814/1024 [MB] (28 MBps) Copying: 843/1024 [MB] (29 MBps) Copying: 873/1024 [MB] (29 MBps) Copying: 902/1024 [MB] (29 MBps) Copying: 931/1024 [MB] (28 MBps) Copying: 960/1024 [MB] (28 MBps) Copying: 989/1024 [MB] (29 MBps) Copying: 1019/1024 [MB] (29 MBps) Copying: 1048568/1048576 [kB] (4784 kBps) Copying: 1024/1024 [MB] (average 28 MBps)[2024-07-10 12:42:35.536826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:26.279 [2024-07-10 12:42:35.536891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:40:26.279 [2024-07-10 12:42:35.536919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:40:26.279 [2024-07-10 12:42:35.536932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:26.279 [2024-07-10 12:42:35.538173] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:40:26.279 [2024-07-10 12:42:35.543828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:26.279 [2024-07-10 12:42:35.543867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:40:26.279 [2024-07-10 12:42:35.543883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.622 ms 00:40:26.280 [2024-07-10 12:42:35.543894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:26.280 [2024-07-10 12:42:35.552772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:26.280 [2024-07-10 12:42:35.552811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:40:26.280 [2024-07-10 12:42:35.552826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.950 ms 00:40:26.280 [2024-07-10 12:42:35.552846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:26.280 [2024-07-10 12:42:35.552877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:26.280 [2024-07-10 12:42:35.552889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:40:26.280 [2024-07-10 12:42:35.552900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:40:26.280 [2024-07-10 12:42:35.552910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:26.280 [2024-07-10 
12:42:35.552957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:26.280 [2024-07-10 12:42:35.552969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:40:26.280 [2024-07-10 12:42:35.552980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:40:26.280 [2024-07-10 12:42:35.552990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:26.280 [2024-07-10 12:42:35.553009] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:40:26.280 [2024-07-10 12:42:35.553023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 128768 / 261120 wr_cnt: 1 state: open 00:40:26.280 [2024-07-10 12:42:35.553036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:40:26.280 [2024-07-10 12:42:35.553048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:40:26.280 [2024-07-10 12:42:35.553059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:40:26.280 [2024-07-10 12:42:35.553070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:40:26.280 [2024-07-10 12:42:35.553081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:40:26.280 [2024-07-10 12:42:35.553092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:40:26.280 [2024-07-10 12:42:35.553103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:40:26.280 [2024-07-10 12:42:35.553114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:40:26.280 [2024-07-10 12:42:35.553125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:40:26.280 [2024-07-10 12:42:35.553136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:40:26.280 [2024-07-10 12:42:35.553147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:40:26.280 [2024-07-10 12:42:35.553157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:40:26.280 [2024-07-10 12:42:35.553168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:40:26.280 [2024-07-10 12:42:35.553178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:40:26.280 [2024-07-10 12:42:35.553188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:40:26.280 [2024-07-10 12:42:35.553199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:40:26.280 [2024-07-10 12:42:35.553210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:40:26.280 [2024-07-10 12:42:35.553221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:40:26.280 [2024-07-10 12:42:35.553231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:40:26.280 [2024-07-10 12:42:35.553243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 
00:40:26.280 [2024-07-10 12:42:35.553254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:40:26.280 [2024-07-10 12:42:35.553265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:40:26.280 [2024-07-10 12:42:35.553276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:40:26.280 [2024-07-10 12:42:35.553286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:40:26.280 [2024-07-10 12:42:35.553297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:40:26.280 [2024-07-10 12:42:35.553308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:40:26.280 [2024-07-10 12:42:35.553319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:40:26.280 [2024-07-10 12:42:35.553330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:40:26.280 [2024-07-10 12:42:35.553340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:40:26.280 [2024-07-10 12:42:35.553351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:40:26.280 [2024-07-10 12:42:35.553361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:40:26.280 [2024-07-10 12:42:35.553372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:40:26.280 [2024-07-10 12:42:35.553383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:40:26.280 [2024-07-10 12:42:35.553394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:40:26.280 [2024-07-10 12:42:35.553405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:40:26.280 [2024-07-10 12:42:35.553416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:40:26.280 [2024-07-10 12:42:35.553426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:40:26.280 [2024-07-10 12:42:35.553436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:40:26.280 [2024-07-10 12:42:35.553447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:40:26.280 [2024-07-10 12:42:35.553457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:40:26.280 [2024-07-10 12:42:35.553468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:40:26.280 [2024-07-10 12:42:35.553480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:40:26.280 [2024-07-10 12:42:35.553491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:40:26.280 [2024-07-10 12:42:35.553502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:40:26.280 [2024-07-10 12:42:35.553512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 
wr_cnt: 0 state: free 00:40:26.280 [2024-07-10 12:42:35.553522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:40:26.280 [2024-07-10 12:42:35.553532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:40:26.280 [2024-07-10 12:42:35.553543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:40:26.280 [2024-07-10 12:42:35.553554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:40:26.280 [2024-07-10 12:42:35.553564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:40:26.280 [2024-07-10 12:42:35.553574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:40:26.280 [2024-07-10 12:42:35.553585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:40:26.280 [2024-07-10 12:42:35.553596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:40:26.280 [2024-07-10 12:42:35.553607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:40:26.280 [2024-07-10 12:42:35.553620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:40:26.280 [2024-07-10 12:42:35.553631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:40:26.280 [2024-07-10 12:42:35.553654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:40:26.280 [2024-07-10 12:42:35.553665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:40:26.280 [2024-07-10 12:42:35.553676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:40:26.280 [2024-07-10 12:42:35.553687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:40:26.280 [2024-07-10 12:42:35.553698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:40:26.280 [2024-07-10 12:42:35.553708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:40:26.280 [2024-07-10 12:42:35.553719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:40:26.280 [2024-07-10 12:42:35.553746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:40:26.280 [2024-07-10 12:42:35.553757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:40:26.280 [2024-07-10 12:42:35.553768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:40:26.280 [2024-07-10 12:42:35.553779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:40:26.280 [2024-07-10 12:42:35.553789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:40:26.280 [2024-07-10 12:42:35.553800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:40:26.281 [2024-07-10 12:42:35.553811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 71: 0 / 261120 wr_cnt: 0 state: free 00:40:26.281 [2024-07-10 12:42:35.553822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:40:26.281 [2024-07-10 12:42:35.553833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:40:26.281 [2024-07-10 12:42:35.553844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:40:26.281 [2024-07-10 12:42:35.553856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:40:26.281 [2024-07-10 12:42:35.553867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:40:26.281 [2024-07-10 12:42:35.553878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:40:26.281 [2024-07-10 12:42:35.553888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:40:26.281 [2024-07-10 12:42:35.553899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:40:26.281 [2024-07-10 12:42:35.553918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:40:26.281 [2024-07-10 12:42:35.553929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:40:26.281 [2024-07-10 12:42:35.553940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:40:26.281 [2024-07-10 12:42:35.553951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:40:26.281 [2024-07-10 12:42:35.553962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:40:26.281 [2024-07-10 12:42:35.553973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:40:26.281 [2024-07-10 12:42:35.553985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:40:26.281 [2024-07-10 12:42:35.553996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:40:26.281 [2024-07-10 12:42:35.554008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:40:26.281 [2024-07-10 12:42:35.554019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:40:26.281 [2024-07-10 12:42:35.554037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:40:26.281 [2024-07-10 12:42:35.554047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:40:26.281 [2024-07-10 12:42:35.554058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:40:26.281 [2024-07-10 12:42:35.554069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:40:26.281 [2024-07-10 12:42:35.554080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:40:26.281 [2024-07-10 12:42:35.554091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:40:26.281 [2024-07-10 12:42:35.554102] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:40:26.281 [2024-07-10 12:42:35.554112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:40:26.281 [2024-07-10 12:42:35.554123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:40:26.281 [2024-07-10 12:42:35.554133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:40:26.281 [2024-07-10 12:42:35.554144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:40:26.281 [2024-07-10 12:42:35.554163] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:40:26.281 [2024-07-10 12:42:35.554173] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c69e99e1-d5e8-4abd-adc9-da7bdc7f3597 00:40:26.281 [2024-07-10 12:42:35.554184] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 128768 00:40:26.281 [2024-07-10 12:42:35.554195] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 128800 00:40:26.281 [2024-07-10 12:42:35.554204] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 128768 00:40:26.281 [2024-07-10 12:42:35.554215] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0002 00:40:26.281 [2024-07-10 12:42:35.554225] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:40:26.281 [2024-07-10 12:42:35.554235] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:40:26.281 [2024-07-10 12:42:35.554245] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:40:26.281 [2024-07-10 12:42:35.554253] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:40:26.281 [2024-07-10 12:42:35.554263] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:40:26.281 [2024-07-10 12:42:35.554273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:26.281 [2024-07-10 12:42:35.554283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:40:26.281 [2024-07-10 12:42:35.554297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.266 ms 00:40:26.281 [2024-07-10 12:42:35.554308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:26.281 [2024-07-10 12:42:35.574330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:26.281 [2024-07-10 12:42:35.574369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:40:26.281 [2024-07-10 12:42:35.574385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.036 ms 00:40:26.281 [2024-07-10 12:42:35.574395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:26.281 [2024-07-10 12:42:35.574918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:26.281 [2024-07-10 12:42:35.574935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:40:26.281 [2024-07-10 12:42:35.574946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.497 ms 00:40:26.281 [2024-07-10 12:42:35.574956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:26.281 [2024-07-10 12:42:35.618285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:26.281 [2024-07-10 12:42:35.618335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:40:26.281 [2024-07-10 12:42:35.618351] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:26.281 [2024-07-10 12:42:35.618361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:26.281 [2024-07-10 12:42:35.618430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:26.281 [2024-07-10 12:42:35.618442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:40:26.281 [2024-07-10 12:42:35.618453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:26.281 [2024-07-10 12:42:35.618463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:26.281 [2024-07-10 12:42:35.618526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:26.281 [2024-07-10 12:42:35.618539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:40:26.281 [2024-07-10 12:42:35.618551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:26.281 [2024-07-10 12:42:35.618561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:26.281 [2024-07-10 12:42:35.618582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:26.281 [2024-07-10 12:42:35.618594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:40:26.281 [2024-07-10 12:42:35.618605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:26.281 [2024-07-10 12:42:35.618615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:26.281 [2024-07-10 12:42:35.743362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:26.281 [2024-07-10 12:42:35.743416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:40:26.281 [2024-07-10 12:42:35.743433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:26.281 [2024-07-10 12:42:35.743444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:26.540 [2024-07-10 12:42:35.845778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:26.541 [2024-07-10 12:42:35.845856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:40:26.541 [2024-07-10 12:42:35.845873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:26.541 [2024-07-10 12:42:35.845884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:26.541 [2024-07-10 12:42:35.845966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:26.541 [2024-07-10 12:42:35.845978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:40:26.541 [2024-07-10 12:42:35.845990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:26.541 [2024-07-10 12:42:35.846000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:26.541 [2024-07-10 12:42:35.846038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:26.541 [2024-07-10 12:42:35.846051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:40:26.541 [2024-07-10 12:42:35.846071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:26.541 [2024-07-10 12:42:35.846080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:26.541 [2024-07-10 12:42:35.846166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:26.541 [2024-07-10 12:42:35.846184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize memory pools 00:40:26.541 [2024-07-10 12:42:35.846195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:26.541 [2024-07-10 12:42:35.846205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:26.541 [2024-07-10 12:42:35.846233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:26.541 [2024-07-10 12:42:35.846245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:40:26.541 [2024-07-10 12:42:35.846260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:26.541 [2024-07-10 12:42:35.846270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:26.541 [2024-07-10 12:42:35.846309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:26.541 [2024-07-10 12:42:35.846320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:40:26.541 [2024-07-10 12:42:35.846330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:26.541 [2024-07-10 12:42:35.846340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:26.541 [2024-07-10 12:42:35.846386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:26.541 [2024-07-10 12:42:35.846397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:40:26.541 [2024-07-10 12:42:35.846410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:26.541 [2024-07-10 12:42:35.846420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:26.541 [2024-07-10 12:42:35.846542] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 312.653 ms, result 0 00:40:28.450 00:40:28.450 00:40:28.450 12:42:37 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:40:28.450 [2024-07-10 12:42:37.553973] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:40:28.450 [2024-07-10 12:42:37.554111] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88506 ] 00:40:28.450 [2024-07-10 12:42:37.728926] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:28.710 [2024-07-10 12:42:37.975443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:28.968 [2024-07-10 12:42:38.374669] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:40:28.968 [2024-07-10 12:42:38.374756] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:40:29.228 [2024-07-10 12:42:38.538087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:29.228 [2024-07-10 12:42:38.538157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:40:29.228 [2024-07-10 12:42:38.538176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:40:29.228 [2024-07-10 12:42:38.538187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:29.228 [2024-07-10 12:42:38.538246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:29.228 [2024-07-10 12:42:38.538260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:40:29.228 [2024-07-10 12:42:38.538271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:40:29.228 [2024-07-10 12:42:38.538285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:29.228 [2024-07-10 12:42:38.538307] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:40:29.228 [2024-07-10 12:42:38.539390] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:40:29.228 [2024-07-10 12:42:38.539414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:29.228 [2024-07-10 12:42:38.539428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:40:29.228 [2024-07-10 12:42:38.539439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.114 ms 00:40:29.228 [2024-07-10 12:42:38.539450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:29.228 [2024-07-10 12:42:38.539815] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:40:29.228 [2024-07-10 12:42:38.539839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:29.228 [2024-07-10 12:42:38.539850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:40:29.228 [2024-07-10 12:42:38.539862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:40:29.228 [2024-07-10 12:42:38.539876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:29.228 [2024-07-10 12:42:38.539967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:29.228 [2024-07-10 12:42:38.539981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:40:29.228 [2024-07-10 12:42:38.539992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:40:29.228 [2024-07-10 12:42:38.540010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:29.228 [2024-07-10 12:42:38.540447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:40:29.228 [2024-07-10 12:42:38.540462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:40:29.228 [2024-07-10 12:42:38.540473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.399 ms 00:40:29.228 [2024-07-10 12:42:38.540486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:29.228 [2024-07-10 12:42:38.540558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:29.228 [2024-07-10 12:42:38.540571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:40:29.228 [2024-07-10 12:42:38.540582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:40:29.228 [2024-07-10 12:42:38.540592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:29.228 [2024-07-10 12:42:38.540619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:29.228 [2024-07-10 12:42:38.540630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:40:29.228 [2024-07-10 12:42:38.540640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:40:29.228 [2024-07-10 12:42:38.540650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:29.228 [2024-07-10 12:42:38.540675] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:40:29.228 [2024-07-10 12:42:38.546092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:29.228 [2024-07-10 12:42:38.546123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:40:29.228 [2024-07-10 12:42:38.546140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.430 ms 00:40:29.228 [2024-07-10 12:42:38.546150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:29.228 [2024-07-10 12:42:38.546186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:29.228 [2024-07-10 12:42:38.546197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:40:29.228 [2024-07-10 12:42:38.546207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:40:29.228 [2024-07-10 12:42:38.546217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:29.228 [2024-07-10 12:42:38.546267] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:40:29.228 [2024-07-10 12:42:38.546292] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:40:29.228 [2024-07-10 12:42:38.546327] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:40:29.228 [2024-07-10 12:42:38.546347] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:40:29.228 [2024-07-10 12:42:38.546432] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:40:29.228 [2024-07-10 12:42:38.546446] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:40:29.228 [2024-07-10 12:42:38.546459] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:40:29.229 [2024-07-10 12:42:38.546482] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:40:29.229 [2024-07-10 12:42:38.546494] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:40:29.229 [2024-07-10 12:42:38.546505] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:40:29.229 [2024-07-10 12:42:38.546516] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:40:29.229 [2024-07-10 12:42:38.546525] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:40:29.229 [2024-07-10 12:42:38.546538] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:40:29.229 [2024-07-10 12:42:38.546549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:29.229 [2024-07-10 12:42:38.546559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:40:29.229 [2024-07-10 12:42:38.546569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:40:29.229 [2024-07-10 12:42:38.546579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:29.229 [2024-07-10 12:42:38.546648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:29.229 [2024-07-10 12:42:38.546659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:40:29.229 [2024-07-10 12:42:38.546669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:40:29.229 [2024-07-10 12:42:38.546679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:29.229 [2024-07-10 12:42:38.546773] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:40:29.229 [2024-07-10 12:42:38.546787] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:40:29.229 [2024-07-10 12:42:38.546797] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:40:29.229 [2024-07-10 12:42:38.546808] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:29.229 [2024-07-10 12:42:38.546818] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:40:29.229 [2024-07-10 12:42:38.546827] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:40:29.229 [2024-07-10 12:42:38.546837] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:40:29.229 [2024-07-10 12:42:38.546848] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:40:29.229 [2024-07-10 12:42:38.546857] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:40:29.229 [2024-07-10 12:42:38.546867] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:40:29.229 [2024-07-10 12:42:38.546878] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:40:29.229 [2024-07-10 12:42:38.546887] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:40:29.229 [2024-07-10 12:42:38.546896] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:40:29.229 [2024-07-10 12:42:38.546906] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:40:29.229 [2024-07-10 12:42:38.546916] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:40:29.229 [2024-07-10 12:42:38.546925] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:29.229 [2024-07-10 12:42:38.546935] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:40:29.229 [2024-07-10 12:42:38.546945] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:40:29.229 [2024-07-10 12:42:38.546954] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:29.229 [2024-07-10 12:42:38.546964] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:40:29.229 [2024-07-10 12:42:38.546974] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:40:29.229 [2024-07-10 12:42:38.546983] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:29.229 [2024-07-10 12:42:38.547003] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:40:29.229 [2024-07-10 12:42:38.547013] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:40:29.229 [2024-07-10 12:42:38.547022] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:29.229 [2024-07-10 12:42:38.547031] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:40:29.229 [2024-07-10 12:42:38.547041] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:40:29.229 [2024-07-10 12:42:38.547050] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:29.229 [2024-07-10 12:42:38.547059] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:40:29.229 [2024-07-10 12:42:38.547069] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:40:29.229 [2024-07-10 12:42:38.547079] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:29.229 [2024-07-10 12:42:38.547089] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:40:29.229 [2024-07-10 12:42:38.547098] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:40:29.229 [2024-07-10 12:42:38.547107] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:40:29.229 [2024-07-10 12:42:38.547116] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:40:29.229 [2024-07-10 12:42:38.547125] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:40:29.229 [2024-07-10 12:42:38.547135] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:40:29.229 [2024-07-10 12:42:38.547144] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:40:29.229 [2024-07-10 12:42:38.547154] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:40:29.229 [2024-07-10 12:42:38.547163] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:29.229 [2024-07-10 12:42:38.547173] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:40:29.229 [2024-07-10 12:42:38.547182] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:40:29.229 [2024-07-10 12:42:38.547193] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:29.229 [2024-07-10 12:42:38.547202] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:40:29.229 [2024-07-10 12:42:38.547214] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:40:29.229 [2024-07-10 12:42:38.547223] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:40:29.229 [2024-07-10 12:42:38.547234] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:29.229 [2024-07-10 12:42:38.547244] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:40:29.229 [2024-07-10 12:42:38.547254] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:40:29.229 [2024-07-10 12:42:38.547263] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:40:29.229 
[2024-07-10 12:42:38.547273] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:40:29.229 [2024-07-10 12:42:38.547283] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:40:29.229 [2024-07-10 12:42:38.547292] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:40:29.229 [2024-07-10 12:42:38.547303] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:40:29.229 [2024-07-10 12:42:38.547316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:40:29.229 [2024-07-10 12:42:38.547328] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:40:29.229 [2024-07-10 12:42:38.547340] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:40:29.229 [2024-07-10 12:42:38.547351] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:40:29.229 [2024-07-10 12:42:38.547361] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:40:29.229 [2024-07-10 12:42:38.547372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:40:29.229 [2024-07-10 12:42:38.547383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:40:29.229 [2024-07-10 12:42:38.547393] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:40:29.229 [2024-07-10 12:42:38.547404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:40:29.229 [2024-07-10 12:42:38.547415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:40:29.229 [2024-07-10 12:42:38.547426] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:40:29.229 [2024-07-10 12:42:38.547437] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:40:29.229 [2024-07-10 12:42:38.547447] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:40:29.229 [2024-07-10 12:42:38.547457] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:40:29.229 [2024-07-10 12:42:38.547468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:40:29.229 [2024-07-10 12:42:38.547478] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:40:29.229 [2024-07-10 12:42:38.547493] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:40:29.229 [2024-07-10 12:42:38.547504] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:40:29.229 [2024-07-10 12:42:38.547515] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:40:29.229 [2024-07-10 12:42:38.547525] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:40:29.229 [2024-07-10 12:42:38.547537] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:40:29.229 [2024-07-10 12:42:38.547548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:29.229 [2024-07-10 12:42:38.547559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:40:29.229 [2024-07-10 12:42:38.547570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.838 ms 00:40:29.229 [2024-07-10 12:42:38.547581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:29.229 [2024-07-10 12:42:38.592026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:29.229 [2024-07-10 12:42:38.592087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:40:29.229 [2024-07-10 12:42:38.592104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.470 ms 00:40:29.229 [2024-07-10 12:42:38.592116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:29.229 [2024-07-10 12:42:38.592228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:29.229 [2024-07-10 12:42:38.592241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:40:29.229 [2024-07-10 12:42:38.592253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:40:29.229 [2024-07-10 12:42:38.592263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:29.229 [2024-07-10 12:42:38.644531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:29.229 [2024-07-10 12:42:38.644595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:40:29.229 [2024-07-10 12:42:38.644613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.269 ms 00:40:29.229 [2024-07-10 12:42:38.644623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:29.229 [2024-07-10 12:42:38.644686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:29.229 [2024-07-10 12:42:38.644698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:40:29.230 [2024-07-10 12:42:38.644715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:40:29.230 [2024-07-10 12:42:38.644725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:29.230 [2024-07-10 12:42:38.644870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:29.230 [2024-07-10 12:42:38.644885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:40:29.230 [2024-07-10 12:42:38.644897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:40:29.230 [2024-07-10 12:42:38.644907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:29.230 [2024-07-10 12:42:38.645036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:29.230 [2024-07-10 12:42:38.645059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:40:29.230 [2024-07-10 12:42:38.645070] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:40:29.230 [2024-07-10 12:42:38.645084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:29.230 [2024-07-10 12:42:38.667181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:29.230 [2024-07-10 12:42:38.667235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:40:29.230 [2024-07-10 12:42:38.667256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.109 ms 00:40:29.230 [2024-07-10 12:42:38.667266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:29.230 [2024-07-10 12:42:38.667443] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:40:29.230 [2024-07-10 12:42:38.667461] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:40:29.230 [2024-07-10 12:42:38.667474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:29.230 [2024-07-10 12:42:38.667485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:40:29.230 [2024-07-10 12:42:38.667498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:40:29.230 [2024-07-10 12:42:38.667508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:29.230 [2024-07-10 12:42:38.678027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:29.230 [2024-07-10 12:42:38.678067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:40:29.230 [2024-07-10 12:42:38.678081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.515 ms 00:40:29.230 [2024-07-10 12:42:38.678092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:29.230 [2024-07-10 12:42:38.678210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:29.230 [2024-07-10 12:42:38.678222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:40:29.230 [2024-07-10 12:42:38.678234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:40:29.230 [2024-07-10 12:42:38.678244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:29.230 [2024-07-10 12:42:38.678299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:29.230 [2024-07-10 12:42:38.678312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:40:29.230 [2024-07-10 12:42:38.678327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:40:29.230 [2024-07-10 12:42:38.678338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:29.230 [2024-07-10 12:42:38.679068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:29.230 [2024-07-10 12:42:38.679089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:40:29.230 [2024-07-10 12:42:38.679100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.689 ms 00:40:29.230 [2024-07-10 12:42:38.679110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:29.230 [2024-07-10 12:42:38.679132] mngt/ftl_mngt_p2l.c: 132:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:40:29.230 [2024-07-10 12:42:38.679146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:29.230 [2024-07-10 12:42:38.679156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:40:29.230 [2024-07-10 12:42:38.679182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:40:29.230 [2024-07-10 12:42:38.679192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:29.230 [2024-07-10 12:42:38.692438] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:40:29.230 [2024-07-10 12:42:38.692660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:29.230 [2024-07-10 12:42:38.692675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:40:29.230 [2024-07-10 12:42:38.692688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.467 ms 00:40:29.230 [2024-07-10 12:42:38.692699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:29.230 [2024-07-10 12:42:38.694752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:29.230 [2024-07-10 12:42:38.694783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:40:29.230 [2024-07-10 12:42:38.694795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.005 ms 00:40:29.230 [2024-07-10 12:42:38.694810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:29.230 [2024-07-10 12:42:38.694907] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:40:29.230 [2024-07-10 12:42:38.695325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:29.230 [2024-07-10 12:42:38.695341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:40:29.230 [2024-07-10 12:42:38.695353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.442 ms 00:40:29.230 [2024-07-10 12:42:38.695363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:29.230 [2024-07-10 12:42:38.695395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:29.230 [2024-07-10 12:42:38.695406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:40:29.230 [2024-07-10 12:42:38.695418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:40:29.230 [2024-07-10 12:42:38.695432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:29.230 [2024-07-10 12:42:38.695466] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:40:29.230 [2024-07-10 12:42:38.695479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:29.230 [2024-07-10 12:42:38.695489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:40:29.230 [2024-07-10 12:42:38.695500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:40:29.230 [2024-07-10 12:42:38.695510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:29.494 [2024-07-10 12:42:38.733059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:29.494 [2024-07-10 12:42:38.733101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:40:29.494 [2024-07-10 12:42:38.733123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.588 ms 00:40:29.494 [2024-07-10 12:42:38.733134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:29.494 [2024-07-10 12:42:38.733211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:29.494 [2024-07-10 12:42:38.733224] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:40:29.494 [2024-07-10 12:42:38.733235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:40:29.494 [2024-07-10 12:42:38.733246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:29.494 [2024-07-10 12:42:38.739222] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 199.849 ms, result 0 00:41:01.786  Copying: 32/1024 [MB] (32 MBps) Copying: 68/1024 [MB] (35 MBps) Copying: 99/1024 [MB] (31 MBps) Copying: 131/1024 [MB] (32 MBps) Copying: 165/1024 [MB] (33 MBps) Copying: 200/1024 [MB] (34 MBps) Copying: 231/1024 [MB] (30 MBps) Copying: 264/1024 [MB] (32 MBps) Copying: 296/1024 [MB] (32 MBps) Copying: 329/1024 [MB] (32 MBps) Copying: 361/1024 [MB] (32 MBps) Copying: 399/1024 [MB] (38 MBps) Copying: 432/1024 [MB] (32 MBps) Copying: 464/1024 [MB] (32 MBps) Copying: 496/1024 [MB] (32 MBps) Copying: 528/1024 [MB] (31 MBps) Copying: 559/1024 [MB] (31 MBps) Copying: 589/1024 [MB] (30 MBps) Copying: 620/1024 [MB] (30 MBps) Copying: 650/1024 [MB] (30 MBps) Copying: 681/1024 [MB] (30 MBps) Copying: 711/1024 [MB] (30 MBps) Copying: 740/1024 [MB] (28 MBps) Copying: 770/1024 [MB] (29 MBps) Copying: 798/1024 [MB] (28 MBps) Copying: 827/1024 [MB] (28 MBps) Copying: 859/1024 [MB] (31 MBps) Copying: 893/1024 [MB] (33 MBps) Copying: 925/1024 [MB] (32 MBps) Copying: 958/1024 [MB] (33 MBps) Copying: 990/1024 [MB] (31 MBps) Copying: 1021/1024 [MB] (30 MBps) Copying: 1024/1024 [MB] (average 31 MBps)[2024-07-10 12:43:11.120210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:01.786 [2024-07-10 12:43:11.120299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:41:01.786 [2024-07-10 12:43:11.120333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:41:01.786 [2024-07-10 12:43:11.120347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:01.786 [2024-07-10 12:43:11.120385] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:41:01.786 [2024-07-10 12:43:11.126354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:01.786 [2024-07-10 12:43:11.126405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:41:01.786 [2024-07-10 12:43:11.126427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.952 ms 00:41:01.786 [2024-07-10 12:43:11.126444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:01.786 [2024-07-10 12:43:11.126773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:01.786 [2024-07-10 12:43:11.126794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:41:01.786 [2024-07-10 12:43:11.126820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:41:01.786 [2024-07-10 12:43:11.126837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:01.786 [2024-07-10 12:43:11.126882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:01.786 [2024-07-10 12:43:11.126900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:41:01.786 [2024-07-10 12:43:11.126918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:41:01.786 [2024-07-10 12:43:11.126934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:01.786 [2024-07-10 
12:43:11.127007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:01.786 [2024-07-10 12:43:11.127025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:41:01.786 [2024-07-10 12:43:11.127042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:41:01.786 [2024-07-10 12:43:11.127057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:01.786 [2024-07-10 12:43:11.127086] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:41:01.786 [2024-07-10 12:43:11.127108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133632 / 261120 wr_cnt: 1 state: open 00:41:01.786 [2024-07-10 12:43:11.127128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:41:01.786 [2024-07-10 12:43:11.127147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:41:01.786 [2024-07-10 12:43:11.127164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:41:01.786 [2024-07-10 12:43:11.127181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:41:01.786 [2024-07-10 12:43:11.127198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:41:01.786 [2024-07-10 12:43:11.127216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:41:01.786 [2024-07-10 12:43:11.127233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:41:01.786 [2024-07-10 12:43:11.127250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:41:01.786 [2024-07-10 12:43:11.127267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:41:01.786 [2024-07-10 12:43:11.127284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:41:01.786 [2024-07-10 12:43:11.127301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:41:01.786 [2024-07-10 12:43:11.127318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:41:01.786 [2024-07-10 12:43:11.127335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:41:01.786 [2024-07-10 12:43:11.127352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:41:01.786 [2024-07-10 12:43:11.127369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:41:01.786 [2024-07-10 12:43:11.127385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:41:01.786 [2024-07-10 12:43:11.127404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:41:01.786 [2024-07-10 12:43:11.127421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:41:01.786 [2024-07-10 12:43:11.127438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:41:01.786 [2024-07-10 12:43:11.127455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 
00:41:01.786 [2024-07-10 12:43:11.127472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:41:01.786 [2024-07-10 12:43:11.127488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.127505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.127522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.127539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.127556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.127573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.127590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.127606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.127624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.127643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.127660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.127677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.127694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.127712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.127977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.128086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.128189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.128377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.128459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.128536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.128688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.128787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.128929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.129181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 
wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.129263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.129509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.129590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.129612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.129629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.129647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.129664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.129682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.129699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.129717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.129749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.129785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.129803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.129821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.129839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.129856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.129873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.129891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.129909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.129927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.129945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.129962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.129979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.129996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.130014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 71: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.130031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.130048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.130065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.130082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.130100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.130117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.130134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.130152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.130169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.130186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.130203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.130220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.130237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.130254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.130272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.130289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.130307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.130324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.130342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.130359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.130376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.130394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.130410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.130428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.130447] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.130465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.130482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.130500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.130517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:41:01.787 [2024-07-10 12:43:11.130547] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:41:01.787 [2024-07-10 12:43:11.130563] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c69e99e1-d5e8-4abd-adc9-da7bdc7f3597 00:41:01.787 [2024-07-10 12:43:11.130582] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133632 00:41:01.787 [2024-07-10 12:43:11.130598] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 4896 00:41:01.787 [2024-07-10 12:43:11.130615] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 4864 00:41:01.787 [2024-07-10 12:43:11.130632] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0066 00:41:01.787 [2024-07-10 12:43:11.130648] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:41:01.787 [2024-07-10 12:43:11.130664] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:41:01.787 [2024-07-10 12:43:11.130681] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:41:01.787 [2024-07-10 12:43:11.130696] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:41:01.787 [2024-07-10 12:43:11.130711] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:41:01.787 [2024-07-10 12:43:11.130741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:01.787 [2024-07-10 12:43:11.130765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:41:01.787 [2024-07-10 12:43:11.130784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.648 ms 00:41:01.787 [2024-07-10 12:43:11.130800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:01.787 [2024-07-10 12:43:11.152155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:01.787 [2024-07-10 12:43:11.152200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:41:01.787 [2024-07-10 12:43:11.152215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.346 ms 00:41:01.787 [2024-07-10 12:43:11.152225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:01.787 [2024-07-10 12:43:11.152862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:01.787 [2024-07-10 12:43:11.152878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:41:01.787 [2024-07-10 12:43:11.152890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.581 ms 00:41:01.787 [2024-07-10 12:43:11.152900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:01.788 [2024-07-10 12:43:11.199609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:01.788 [2024-07-10 12:43:11.199663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:41:01.788 [2024-07-10 12:43:11.199680] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:01.788 [2024-07-10 12:43:11.199697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:01.788 [2024-07-10 12:43:11.199778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:01.788 [2024-07-10 12:43:11.199791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:41:01.788 [2024-07-10 12:43:11.199802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:01.788 [2024-07-10 12:43:11.199813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:01.788 [2024-07-10 12:43:11.199884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:01.788 [2024-07-10 12:43:11.199898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:41:01.788 [2024-07-10 12:43:11.199909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:01.788 [2024-07-10 12:43:11.199919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:01.788 [2024-07-10 12:43:11.199943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:01.788 [2024-07-10 12:43:11.199954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:41:01.788 [2024-07-10 12:43:11.199965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:01.788 [2024-07-10 12:43:11.199975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:02.107 [2024-07-10 12:43:11.322887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:02.107 [2024-07-10 12:43:11.322960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:41:02.107 [2024-07-10 12:43:11.322979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:02.107 [2024-07-10 12:43:11.322997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:02.107 [2024-07-10 12:43:11.426287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:02.107 [2024-07-10 12:43:11.426363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:41:02.107 [2024-07-10 12:43:11.426381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:02.107 [2024-07-10 12:43:11.426393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:02.107 [2024-07-10 12:43:11.426476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:02.107 [2024-07-10 12:43:11.426488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:41:02.107 [2024-07-10 12:43:11.426499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:02.107 [2024-07-10 12:43:11.426509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:02.107 [2024-07-10 12:43:11.426548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:02.107 [2024-07-10 12:43:11.426566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:41:02.107 [2024-07-10 12:43:11.426577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:02.107 [2024-07-10 12:43:11.426588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:02.107 [2024-07-10 12:43:11.426675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:02.107 [2024-07-10 12:43:11.426688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize memory pools 00:41:02.107 [2024-07-10 12:43:11.426699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:02.107 [2024-07-10 12:43:11.426709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:02.107 [2024-07-10 12:43:11.426758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:02.107 [2024-07-10 12:43:11.426772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:41:02.107 [2024-07-10 12:43:11.426789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:02.107 [2024-07-10 12:43:11.426799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:02.107 [2024-07-10 12:43:11.426843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:02.107 [2024-07-10 12:43:11.426855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:41:02.107 [2024-07-10 12:43:11.426866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:02.107 [2024-07-10 12:43:11.426876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:02.107 [2024-07-10 12:43:11.426919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:02.107 [2024-07-10 12:43:11.426934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:41:02.107 [2024-07-10 12:43:11.426944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:02.107 [2024-07-10 12:43:11.426954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:02.107 [2024-07-10 12:43:11.427080] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 307.578 ms, result 0 00:41:03.504 00:41:03.504 00:41:03.504 12:43:12 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:41:05.405 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:41:05.405 12:43:14 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:41:05.405 12:43:14 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:41:05.405 12:43:14 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:41:05.405 12:43:14 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:41:05.405 12:43:14 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:41:05.405 Process with pid 87109 is not found 00:41:05.405 Remove shared memory files 00:41:05.405 12:43:14 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 87109 00:41:05.405 12:43:14 ftl.ftl_restore_fast -- common/autotest_common.sh@948 -- # '[' -z 87109 ']' 00:41:05.405 12:43:14 ftl.ftl_restore_fast -- common/autotest_common.sh@952 -- # kill -0 87109 00:41:05.405 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (87109) - No such process 00:41:05.405 12:43:14 ftl.ftl_restore_fast -- common/autotest_common.sh@975 -- # echo 'Process with pid 87109 is not found' 00:41:05.405 12:43:14 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:41:05.405 12:43:14 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:41:05.405 12:43:14 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:41:05.405 12:43:14 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f 
/dev/hugepages/ftl_c69e99e1-d5e8-4abd-adc9-da7bdc7f3597_band_md /dev/hugepages/ftl_c69e99e1-d5e8-4abd-adc9-da7bdc7f3597_l2p_l1 /dev/hugepages/ftl_c69e99e1-d5e8-4abd-adc9-da7bdc7f3597_l2p_l2 /dev/hugepages/ftl_c69e99e1-d5e8-4abd-adc9-da7bdc7f3597_l2p_l2_ctx /dev/hugepages/ftl_c69e99e1-d5e8-4abd-adc9-da7bdc7f3597_nvc_md /dev/hugepages/ftl_c69e99e1-d5e8-4abd-adc9-da7bdc7f3597_p2l_pool /dev/hugepages/ftl_c69e99e1-d5e8-4abd-adc9-da7bdc7f3597_sb /dev/hugepages/ftl_c69e99e1-d5e8-4abd-adc9-da7bdc7f3597_sb_shm /dev/hugepages/ftl_c69e99e1-d5e8-4abd-adc9-da7bdc7f3597_trim_bitmap /dev/hugepages/ftl_c69e99e1-d5e8-4abd-adc9-da7bdc7f3597_trim_log /dev/hugepages/ftl_c69e99e1-d5e8-4abd-adc9-da7bdc7f3597_trim_md /dev/hugepages/ftl_c69e99e1-d5e8-4abd-adc9-da7bdc7f3597_vmap 00:41:05.405 12:43:14 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:41:05.405 12:43:14 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:41:05.405 12:43:14 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:41:05.405 00:41:05.405 real 2m54.159s 00:41:05.405 user 2m42.294s 00:41:05.405 sys 0m13.580s 00:41:05.405 ************************************ 00:41:05.405 END TEST ftl_restore_fast 00:41:05.405 ************************************ 00:41:05.405 12:43:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:05.405 12:43:14 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:41:05.405 12:43:14 ftl -- common/autotest_common.sh@1142 -- # return 0 00:41:05.405 Process with pid 79604 is not found 00:41:05.405 12:43:14 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:41:05.405 12:43:14 ftl -- ftl/ftl.sh@14 -- # killprocess 79604 00:41:05.405 12:43:14 ftl -- common/autotest_common.sh@948 -- # '[' -z 79604 ']' 00:41:05.405 12:43:14 ftl -- common/autotest_common.sh@952 -- # kill -0 79604 00:41:05.405 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (79604) - No such process 00:41:05.405 12:43:14 ftl -- common/autotest_common.sh@975 -- # echo 'Process with pid 79604 is not found' 00:41:05.405 12:43:14 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:41:05.405 12:43:14 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=88893 00:41:05.405 12:43:14 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:41:05.405 12:43:14 ftl -- ftl/ftl.sh@20 -- # waitforlisten 88893 00:41:05.405 12:43:14 ftl -- common/autotest_common.sh@829 -- # '[' -z 88893 ']' 00:41:05.405 12:43:14 ftl -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:05.405 12:43:14 ftl -- common/autotest_common.sh@834 -- # local max_retries=100 00:41:05.405 12:43:14 ftl -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:05.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:05.405 12:43:14 ftl -- common/autotest_common.sh@838 -- # xtrace_disable 00:41:05.405 12:43:14 ftl -- common/autotest_common.sh@10 -- # set +x 00:41:05.663 [2024-07-10 12:43:14.894745] Starting SPDK v24.09-pre git sha1 968224f46 / DPDK 24.03.0 initialization... 
00:41:05.663 [2024-07-10 12:43:14.895194] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88893 ] 00:41:05.663 [2024-07-10 12:43:15.091409] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:05.940 [2024-07-10 12:43:15.356856] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:06.874 12:43:16 ftl -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:41:06.874 12:43:16 ftl -- common/autotest_common.sh@862 -- # return 0 00:41:06.874 12:43:16 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:41:07.438 nvme0n1 00:41:07.438 12:43:16 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:41:07.438 12:43:16 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:41:07.438 12:43:16 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:41:07.438 12:43:16 ftl -- ftl/common.sh@28 -- # stores=ce8528b3-773e-42f2-8d49-ed35d174ad86 00:41:07.438 12:43:16 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:41:07.438 12:43:16 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ce8528b3-773e-42f2-8d49-ed35d174ad86 00:41:07.695 12:43:17 ftl -- ftl/ftl.sh@23 -- # killprocess 88893 00:41:07.695 12:43:17 ftl -- common/autotest_common.sh@948 -- # '[' -z 88893 ']' 00:41:07.695 12:43:17 ftl -- common/autotest_common.sh@952 -- # kill -0 88893 00:41:07.695 12:43:17 ftl -- common/autotest_common.sh@953 -- # uname 00:41:07.695 12:43:17 ftl -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:41:07.695 12:43:17 ftl -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88893 00:41:07.695 killing process with pid 88893 00:41:07.695 12:43:17 ftl -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:41:07.695 12:43:17 ftl -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:41:07.695 12:43:17 ftl -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88893' 00:41:07.695 12:43:17 ftl -- common/autotest_common.sh@967 -- # kill 88893 00:41:07.695 12:43:17 ftl -- common/autotest_common.sh@972 -- # wait 88893 00:41:10.983 12:43:19 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:41:10.983 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:41:10.983 Waiting for block devices as requested 00:41:10.983 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:41:10.983 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:41:10.983 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:41:11.240 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:41:16.509 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:41:16.509 12:43:25 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:41:16.509 Remove shared memory files 00:41:16.509 12:43:25 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:41:16.509 12:43:25 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:41:16.509 12:43:25 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:41:16.509 12:43:25 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:41:16.509 12:43:25 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:41:16.509 12:43:25 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:41:16.509 
************************************ 00:41:16.509 END TEST ftl 00:41:16.509 ************************************ 00:41:16.509 00:41:16.509 real 13m58.003s 00:41:16.509 user 16m20.134s 00:41:16.509 sys 1m40.414s 00:41:16.509 12:43:25 ftl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:16.509 12:43:25 ftl -- common/autotest_common.sh@10 -- # set +x 00:41:16.509 12:43:25 -- common/autotest_common.sh@1142 -- # return 0 00:41:16.509 12:43:25 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:41:16.509 12:43:25 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:41:16.509 12:43:25 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:41:16.509 12:43:25 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:41:16.509 12:43:25 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:41:16.509 12:43:25 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:41:16.509 12:43:25 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:41:16.509 12:43:25 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:41:16.509 12:43:25 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:41:16.509 12:43:25 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:41:16.509 12:43:25 -- common/autotest_common.sh@722 -- # xtrace_disable 00:41:16.509 12:43:25 -- common/autotest_common.sh@10 -- # set +x 00:41:16.509 12:43:25 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:41:16.509 12:43:25 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:41:16.509 12:43:25 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:41:16.509 12:43:25 -- common/autotest_common.sh@10 -- # set +x 00:41:18.412 INFO: APP EXITING 00:41:18.412 INFO: killing all VMs 00:41:18.412 INFO: killing vhost app 00:41:18.412 INFO: EXIT DONE 00:41:18.670 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:41:19.236 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:41:19.236 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:41:19.236 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:41:19.236 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:41:19.801 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:41:20.060 Cleaning 00:41:20.060 Removing: /var/run/dpdk/spdk0/config 00:41:20.060 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:41:20.060 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:41:20.060 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:41:20.060 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:41:20.060 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:41:20.060 Removing: /var/run/dpdk/spdk0/hugepage_info 00:41:20.060 Removing: /var/run/dpdk/spdk0 00:41:20.060 Removing: /var/run/dpdk/spdk_pid61842 00:41:20.060 Removing: /var/run/dpdk/spdk_pid62102 00:41:20.060 Removing: /var/run/dpdk/spdk_pid62334 00:41:20.060 Removing: /var/run/dpdk/spdk_pid62444 00:41:20.060 Removing: /var/run/dpdk/spdk_pid62511 00:41:20.060 Removing: /var/run/dpdk/spdk_pid62650 00:41:20.060 Removing: /var/run/dpdk/spdk_pid62673 00:41:20.319 Removing: /var/run/dpdk/spdk_pid62866 00:41:20.319 Removing: /var/run/dpdk/spdk_pid62990 00:41:20.319 Removing: /var/run/dpdk/spdk_pid63102 00:41:20.319 Removing: /var/run/dpdk/spdk_pid63227 00:41:20.319 Removing: /var/run/dpdk/spdk_pid63339 00:41:20.319 Removing: /var/run/dpdk/spdk_pid63385 00:41:20.319 Removing: /var/run/dpdk/spdk_pid63428 00:41:20.319 Removing: /var/run/dpdk/spdk_pid63502 00:41:20.319 Removing: /var/run/dpdk/spdk_pid63626 00:41:20.319 Removing: 
/var/run/dpdk/spdk_pid64066 00:41:20.319 Removing: /var/run/dpdk/spdk_pid64152 00:41:20.319 Removing: /var/run/dpdk/spdk_pid64237 00:41:20.319 Removing: /var/run/dpdk/spdk_pid64253 00:41:20.319 Removing: /var/run/dpdk/spdk_pid64418 00:41:20.319 Removing: /var/run/dpdk/spdk_pid64434 00:41:20.319 Removing: /var/run/dpdk/spdk_pid64595 00:41:20.319 Removing: /var/run/dpdk/spdk_pid64622 00:41:20.319 Removing: /var/run/dpdk/spdk_pid64697 00:41:20.319 Removing: /var/run/dpdk/spdk_pid64721 00:41:20.319 Removing: /var/run/dpdk/spdk_pid64789 00:41:20.319 Removing: /var/run/dpdk/spdk_pid64814 00:41:20.319 Removing: /var/run/dpdk/spdk_pid65001 00:41:20.319 Removing: /var/run/dpdk/spdk_pid65043 00:41:20.319 Removing: /var/run/dpdk/spdk_pid65124 00:41:20.319 Removing: /var/run/dpdk/spdk_pid65205 00:41:20.319 Removing: /var/run/dpdk/spdk_pid65247 00:41:20.319 Removing: /var/run/dpdk/spdk_pid65325 00:41:20.319 Removing: /var/run/dpdk/spdk_pid65372 00:41:20.319 Removing: /var/run/dpdk/spdk_pid65424 00:41:20.319 Removing: /var/run/dpdk/spdk_pid65469 00:41:20.319 Removing: /var/run/dpdk/spdk_pid65517 00:41:20.319 Removing: /var/run/dpdk/spdk_pid65569 00:41:20.319 Removing: /var/run/dpdk/spdk_pid65616 00:41:20.319 Removing: /var/run/dpdk/spdk_pid65663 00:41:20.319 Removing: /var/run/dpdk/spdk_pid65714 00:41:20.319 Removing: /var/run/dpdk/spdk_pid65762 00:41:20.319 Removing: /var/run/dpdk/spdk_pid65814 00:41:20.319 Removing: /var/run/dpdk/spdk_pid65855 00:41:20.319 Removing: /var/run/dpdk/spdk_pid65907 00:41:20.319 Removing: /var/run/dpdk/spdk_pid65959 00:41:20.319 Removing: /var/run/dpdk/spdk_pid66006 00:41:20.319 Removing: /var/run/dpdk/spdk_pid66052 00:41:20.319 Removing: /var/run/dpdk/spdk_pid66099 00:41:20.319 Removing: /var/run/dpdk/spdk_pid66154 00:41:20.319 Removing: /var/run/dpdk/spdk_pid66209 00:41:20.319 Removing: /var/run/dpdk/spdk_pid66250 00:41:20.319 Removing: /var/run/dpdk/spdk_pid66303 00:41:20.319 Removing: /var/run/dpdk/spdk_pid66390 00:41:20.319 Removing: /var/run/dpdk/spdk_pid66512 00:41:20.319 Removing: /var/run/dpdk/spdk_pid66685 00:41:20.319 Removing: /var/run/dpdk/spdk_pid66784 00:41:20.319 Removing: /var/run/dpdk/spdk_pid66833 00:41:20.319 Removing: /var/run/dpdk/spdk_pid67284 00:41:20.319 Removing: /var/run/dpdk/spdk_pid67383 00:41:20.319 Removing: /var/run/dpdk/spdk_pid67504 00:41:20.319 Removing: /var/run/dpdk/spdk_pid67562 00:41:20.319 Removing: /var/run/dpdk/spdk_pid67601 00:41:20.319 Removing: /var/run/dpdk/spdk_pid67678 00:41:20.319 Removing: /var/run/dpdk/spdk_pid68332 00:41:20.319 Removing: /var/run/dpdk/spdk_pid68374 00:41:20.578 Removing: /var/run/dpdk/spdk_pid68863 00:41:20.578 Removing: /var/run/dpdk/spdk_pid68967 00:41:20.578 Removing: /var/run/dpdk/spdk_pid69088 00:41:20.578 Removing: /var/run/dpdk/spdk_pid69146 00:41:20.578 Removing: /var/run/dpdk/spdk_pid69178 00:41:20.578 Removing: /var/run/dpdk/spdk_pid69209 00:41:20.578 Removing: /var/run/dpdk/spdk_pid71078 00:41:20.578 Removing: /var/run/dpdk/spdk_pid71232 00:41:20.578 Removing: /var/run/dpdk/spdk_pid71241 00:41:20.579 Removing: /var/run/dpdk/spdk_pid71253 00:41:20.579 Removing: /var/run/dpdk/spdk_pid71303 00:41:20.579 Removing: /var/run/dpdk/spdk_pid71307 00:41:20.579 Removing: /var/run/dpdk/spdk_pid71319 00:41:20.579 Removing: /var/run/dpdk/spdk_pid71364 00:41:20.579 Removing: /var/run/dpdk/spdk_pid71368 00:41:20.579 Removing: /var/run/dpdk/spdk_pid71380 00:41:20.579 Removing: /var/run/dpdk/spdk_pid71425 00:41:20.579 Removing: /var/run/dpdk/spdk_pid71429 00:41:20.579 Removing: /var/run/dpdk/spdk_pid71441 
00:41:20.579 Removing: /var/run/dpdk/spdk_pid72809 00:41:20.579 Removing: /var/run/dpdk/spdk_pid72920 00:41:20.579 Removing: /var/run/dpdk/spdk_pid74335 00:41:20.579 Removing: /var/run/dpdk/spdk_pid75692 00:41:20.579 Removing: /var/run/dpdk/spdk_pid75812 00:41:20.579 Removing: /var/run/dpdk/spdk_pid75927 00:41:20.579 Removing: /var/run/dpdk/spdk_pid76043 00:41:20.579 Removing: /var/run/dpdk/spdk_pid76186 00:41:20.579 Removing: /var/run/dpdk/spdk_pid76272 00:41:20.579 Removing: /var/run/dpdk/spdk_pid76419 00:41:20.579 Removing: /var/run/dpdk/spdk_pid76795 00:41:20.579 Removing: /var/run/dpdk/spdk_pid76841 00:41:20.579 Removing: /var/run/dpdk/spdk_pid77307 00:41:20.579 Removing: /var/run/dpdk/spdk_pid77492 00:41:20.579 Removing: /var/run/dpdk/spdk_pid77597 00:41:20.579 Removing: /var/run/dpdk/spdk_pid77712 00:41:20.579 Removing: /var/run/dpdk/spdk_pid77771 00:41:20.579 Removing: /var/run/dpdk/spdk_pid77802 00:41:20.579 Removing: /var/run/dpdk/spdk_pid78094 00:41:20.579 Removing: /var/run/dpdk/spdk_pid78164 00:41:20.579 Removing: /var/run/dpdk/spdk_pid78251 00:41:20.579 Removing: /var/run/dpdk/spdk_pid78654 00:41:20.579 Removing: /var/run/dpdk/spdk_pid78806 00:41:20.579 Removing: /var/run/dpdk/spdk_pid79604 00:41:20.579 Removing: /var/run/dpdk/spdk_pid79739 00:41:20.579 Removing: /var/run/dpdk/spdk_pid79944 00:41:20.579 Removing: /var/run/dpdk/spdk_pid80052 00:41:20.579 Removing: /var/run/dpdk/spdk_pid80377 00:41:20.579 Removing: /var/run/dpdk/spdk_pid80627 00:41:20.579 Removing: /var/run/dpdk/spdk_pid81008 00:41:20.579 Removing: /var/run/dpdk/spdk_pid81214 00:41:20.579 Removing: /var/run/dpdk/spdk_pid81351 00:41:20.579 Removing: /var/run/dpdk/spdk_pid81422 00:41:20.579 Removing: /var/run/dpdk/spdk_pid81554 00:41:20.579 Removing: /var/run/dpdk/spdk_pid81596 00:41:20.579 Removing: /var/run/dpdk/spdk_pid81667 00:41:20.579 Removing: /var/run/dpdk/spdk_pid81863 00:41:20.579 Removing: /var/run/dpdk/spdk_pid82096 00:41:20.579 Removing: /var/run/dpdk/spdk_pid82514 00:41:20.579 Removing: /var/run/dpdk/spdk_pid82940 00:41:20.579 Removing: /var/run/dpdk/spdk_pid83364 00:41:20.838 Removing: /var/run/dpdk/spdk_pid83822 00:41:20.838 Removing: /var/run/dpdk/spdk_pid83963 00:41:20.838 Removing: /var/run/dpdk/spdk_pid84057 00:41:20.838 Removing: /var/run/dpdk/spdk_pid84673 00:41:20.838 Removing: /var/run/dpdk/spdk_pid84748 00:41:20.838 Removing: /var/run/dpdk/spdk_pid85164 00:41:20.838 Removing: /var/run/dpdk/spdk_pid85523 00:41:20.838 Removing: /var/run/dpdk/spdk_pid85992 00:41:20.838 Removing: /var/run/dpdk/spdk_pid86110 00:41:20.838 Removing: /var/run/dpdk/spdk_pid86175 00:41:20.838 Removing: /var/run/dpdk/spdk_pid86239 00:41:20.838 Removing: /var/run/dpdk/spdk_pid86306 00:41:20.838 Removing: /var/run/dpdk/spdk_pid86372 00:41:20.838 Removing: /var/run/dpdk/spdk_pid86576 00:41:20.838 Removing: /var/run/dpdk/spdk_pid86664 00:41:20.838 Removing: /var/run/dpdk/spdk_pid86733 00:41:20.838 Removing: /var/run/dpdk/spdk_pid86822 00:41:20.838 Removing: /var/run/dpdk/spdk_pid86857 00:41:20.838 Removing: /var/run/dpdk/spdk_pid86935 00:41:20.838 Removing: /var/run/dpdk/spdk_pid87109 00:41:20.838 Removing: /var/run/dpdk/spdk_pid87345 00:41:20.838 Removing: /var/run/dpdk/spdk_pid87718 00:41:20.838 Removing: /var/run/dpdk/spdk_pid88125 00:41:20.838 Removing: /var/run/dpdk/spdk_pid88506 00:41:20.838 Removing: /var/run/dpdk/spdk_pid88893 00:41:20.838 Clean 00:41:20.838 12:43:30 -- common/autotest_common.sh@1451 -- # return 0 00:41:20.838 12:43:30 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:41:20.838 12:43:30 
-- common/autotest_common.sh@728 -- # xtrace_disable
00:41:20.838 12:43:30 -- common/autotest_common.sh@10 -- # set +x
00:41:20.838 12:43:30 -- spdk/autotest.sh@386 -- # timing_exit autotest
00:41:20.838 12:43:30 -- common/autotest_common.sh@728 -- # xtrace_disable
00:41:20.838 12:43:30 -- common/autotest_common.sh@10 -- # set +x
00:41:21.097 12:43:30 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:41:21.097 12:43:30 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:41:21.097 12:43:30 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:41:21.097 12:43:30 -- spdk/autotest.sh@391 -- # hash lcov
00:41:21.097 12:43:30 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:41:21.097 12:43:30 -- spdk/autotest.sh@393 -- # hostname
00:41:21.097 12:43:30 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:41:21.097 geninfo: WARNING: invalid characters removed from testname!
00:41:53.230 12:43:57 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:41:53.230 12:44:01 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:41:53.797 12:44:03 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:41:56.330 12:44:05 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:41:58.235 12:44:07 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:42:00.774 12:44:09 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:42:02.676 12:44:11 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:42:02.676 12:44:12 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:42:02.676 12:44:12 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:42:02.676 12:44:12 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:42:02.676 12:44:12 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:42:02.676 12:44:12 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:42:02.676 12:44:12 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:42:02.676 12:44:12 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:42:02.676 12:44:12 -- paths/export.sh@5 -- $ export PATH
00:42:02.676 12:44:12 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:42:02.676 12:44:12 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:42:02.676 12:44:12 -- common/autobuild_common.sh@444 -- $ date +%s
00:42:02.676 12:44:12 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720615452.XXXXXX
00:42:02.676 12:44:12 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720615452.jK19f1
00:42:02.676 12:44:12 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
00:42:02.676 12:44:12 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']'
00:42:02.676 12:44:12 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:42:02.676 12:44:12 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:42:02.676 12:44:12 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:42:02.676 12:44:12 -- common/autobuild_common.sh@460 -- $ get_config_params
00:42:02.676 12:44:12 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:42:02.676 12:44:12 -- common/autotest_common.sh@10 -- $ set +x
00:42:02.676 12:44:12 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:42:02.676 12:44:12 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
00:42:02.676 12:44:12 -- pm/common@17 -- $ local monitor
00:42:02.676 12:44:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:42:02.676 12:44:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:42:02.676 12:44:12 -- pm/common@21 -- $ date +%s
00:42:02.676 12:44:12 -- pm/common@25 -- $ sleep 1
00:42:02.676 12:44:12 -- pm/common@21 -- $ date +%s
00:42:02.676 12:44:12 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1720615452
00:42:02.676 12:44:12 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1720615452
00:42:02.676 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1720615452_collect-vmstat.pm.log
00:42:02.676 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1720615452_collect-cpu-load.pm.log
00:42:03.611 12:44:13 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:42:03.869 12:44:13 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10
00:42:03.869 12:44:13 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk
00:42:03.869 12:44:13 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:42:03.869 12:44:13 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:42:03.869 12:44:13 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:42:03.869 12:44:13 -- spdk/autopackage.sh@19 -- $ timing_finish
00:42:03.869 12:44:13 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:42:03.869 12:44:13 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:42:03.869 12:44:13 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:42:03.869 12:44:13 -- spdk/autopackage.sh@20 -- $ exit 0
00:42:03.869 12:44:13 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:42:03.869 12:44:13 -- pm/common@29 -- $ signal_monitor_resources TERM
00:42:03.869 12:44:13 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:42:03.869 12:44:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:42:03.869 12:44:13 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:42:03.869 12:44:13 -- pm/common@44 -- $ pid=90622
00:42:03.869 12:44:13 -- pm/common@50 -- $ kill -TERM 90622
00:42:03.869 12:44:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:42:03.869 12:44:13 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:42:03.869 12:44:13 -- pm/common@44 -- $ pid=90624
00:42:03.869 12:44:13 -- pm/common@50 -- $ kill -TERM 90624
00:42:03.869 + [[ -n 5141 ]]
00:42:03.869 + sudo kill 5141
00:42:03.877 [Pipeline] }
00:42:03.890 [Pipeline] // timeout
00:42:03.895 [Pipeline] }
00:42:03.913 [Pipeline] // stage
00:42:03.917 [Pipeline] }
00:42:03.934 [Pipeline] // catchError
00:42:03.945 [Pipeline] stage
00:42:03.947 [Pipeline] { (Stop VM)
00:42:03.961 [Pipeline] sh
00:42:04.260 + vagrant halt
00:42:07.541 ==> default: Halting domain...
00:42:14.142 [Pipeline] sh
00:42:14.417 + vagrant destroy -f
00:42:17.693 ==> default: Removing domain...
00:42:17.957 [Pipeline] sh
00:42:18.232 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:42:18.305 [Pipeline] }
00:42:18.325 [Pipeline] // stage
00:42:18.329 [Pipeline] }
00:42:18.343 [Pipeline] // dir
00:42:18.350 [Pipeline] }
00:42:18.368 [Pipeline] // wrap
00:42:18.373 [Pipeline] }
00:42:18.386 [Pipeline] // catchError
00:42:18.393 [Pipeline] stage
00:42:18.395 [Pipeline] { (Epilogue)
00:42:18.406 [Pipeline] sh
00:42:18.722 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:42:24.001 [Pipeline] catchError
00:42:24.003 [Pipeline] {
00:42:24.019 [Pipeline] sh
00:42:24.301 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:42:24.301 Artifacts sizes are good
00:42:24.314 [Pipeline] }
00:42:24.337 [Pipeline] // catchError
00:42:24.352 [Pipeline] archiveArtifacts
00:42:24.360 Archiving artifacts
00:42:24.480 [Pipeline] cleanWs
00:42:24.528 [WS-CLEANUP] Deleting project workspace...
00:42:24.528 [WS-CLEANUP] Deferred wipeout is used...
00:42:24.544 [WS-CLEANUP] done
00:42:24.546 [Pipeline] }
00:42:24.566 [Pipeline] // stage
00:42:24.572 [Pipeline] }
00:42:24.590 [Pipeline] // node
00:42:24.597 [Pipeline] End of Pipeline
00:42:24.646 Finished: SUCCESS